doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1511.06342 | 3 | Although the DQN maintains the same network architecture and hyperparameters for all games, the approach is limited in the fact that each network only learns how to play a single game at a time, despite the existence of similarities between games. For example, the tennis-like game of pong and the squash-like game of breakout are both similar in that each game consists of trying to hit a moving ball with a rectangular paddle. A network trained to play multiple games would be able to generalize its knowledge between the games, achieving a single compact state representation as the inter-task similarities are exploited by the network. Having been trained on enough source tasks, the multitask network can also exhibit transfer to new target tasks, which can speed up learning. Training DRL agents can be extremely computationally intensive and therefore reducing training time is a signiï¬cant practical beneï¬t.
1
Published as a conference paper at ICLR 2016 | 1511.06342#3 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 3 | 2 RELATED WORK
2.1 REPRESENTATION LEARNING FROM UNLABELED DATA
Unsupervised representation learning is a fairly well studied problem in general computer vision research, as well as in the context of images. A classic approach to unsupervised representation learning is to do clustering on the data (for example using K-means), and leverage the clusters for improved classiï¬cation scores. In the context of images, one can do hierarchical clustering of image patches (Coates & Ng, 2012) to learn powerful image representations. Another popular method is to train auto-encoders (convolutionally, stacked (Vincent et al., 2010), separating the what and where components of the code (Zhao et al., 2015), ladder structures (Rasmus et al., 2015)) that encode an image into a compact code, and decode the code to reconstruct the image as accurately as possible. These methods have also been shown to learn good feature representations from image pixels. Deep belief networks (Lee et al., 2009) have also been shown to work well in learning hierarchical representations.
2.2 GENERATING NATURAL IMAGES
Generative image models are well studied and fall into two categories: parametric and non- parametric. | 1511.06434#3 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 4 | Ae SSS Pi y | ! u + 2 2 AOD) ACT@ZWRITE) ADDI) ââCARRY)~â«ACT@LEFT) âcaRRY) AcT@nwere)
Figure 2: Example execu- tion trace of single-digit addi- tion. The task is to perform a single-digit add on the num- bers at pointer locations in the ï¬rst two rows. The carry (row 3) and output (row 4) should be updated to reï¬ect the addi- tion. At each time step, an ob- servation of the environment (viewed from each pointer on a scratch pad) is encoded into a ï¬xed-length vector.
By using neural networks to represent the subprograms and learning these from data, the approach can generalize on tasks involving rich perceptual inputs and uncertainty.
We may envision two approaches to provide supervision. In one, we provide a very large number of labeled examples, as in object recognition, speech and machine translation. In the other, the approached followed in this paper, the aim is to provide far fewer labeled examples, but where the labels contain richer information allowing the model to learn compositional structure. While unsupervised and reinforcement learning play important roles in perception and motor control, other cognitive abilities are possible thanks to rich supervision and curriculum learning. This is indeed the reason for sending our children to school. | 1511.06279#4 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 4 | the net effect can be viewed as a form of regularization of the main network, as the approximator has to use only a small fraction of the possible parameters in order to produce an action.
In this paper, we explore the formulation of conditional computation using reinforcement learning. We propose to learn input-dependent activation probabilities for every node (or blocks of nodes), while trying to jointly minimize the prediction errors at the output and the number of participating nodes at every layer, thus reducing the computational load. One can also think of our method as being related to standard dropout, which has been used as a tool to both regularize and speed up the computation. However, we emphasize that dropout is in fact a form of âunconditionalâ computation, in which the computation paths are data-independent. Therefore, usual dropout is less likely to lead to specialized computation paths within a network.
We present the problem formulation, and our solution to the proposed optimization problem, us- ing policy search methods (Deisenroth et al., 2013). Preliminary results are included for standard classiï¬cation benchmarks.
# 2 PROBLEM FORMULATION
Our model consists in a typical fully-connected neural network model, joined with stochastic per- layer policies that activate or deactivate nodes of the neural network in an input-dependent manner, both at train and test time. The exact algorithm is detailed in appendix A. | 1511.06297#4 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 4 | 1
Published as a conference paper at ICLR 2016
The contribution of this paper is to develop and evaluate methods that enable multitask and trans- fer learning for DRL agents, using the ALE as a test environment. To ï¬rst accomplish multitask learning, we design a method called âActor-Mimicâ that leverages techniques from model compres- sion to train a single multitask network using guidance from a set of game-speciï¬c expert networks. The particular form of guidance can vary, and several different approaches are explored and tested empirically. To then achieve transfer learning, we treat a multitask network as being a DQN which was pre-trained on a set of source tasks. We show experimentally that this multitask pre-training can result in a DQN that learns a target task signiï¬cantly faster than a DQN starting from a random initialization, effectively demonstrating that the source task representations generalize to the target task.
# 2 BACKGROUND: DEEP REINFORCEMENT LEARNING | 1511.06342#4 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06279 | 5 | An advantage of our approach to model building and training is that the learned programs exhibit strong generalization. Speciï¬cally, when trained to sort sequences of up to twenty numbers in length, they can sort much longer sequences at test time. In contrast, the experiments will show that more standard sequence to sequence LSTMs only exhibit weak generalization, see Figure 6.
A trained NPI with ï¬xed parameters and a learned library of programs, can act both as an interpreter and as a programmer. As an interpreter, it takes input in the form of a program embedding and input data and subsequently executes the program. As a programmer, it uses samples drawn from a new task to generate a new program embedding that can be added to its library of programs.
# 2 RELATED WORK
Several ideas related to our approach have a long history. For example, the idea of using dynam- ically programmable networks in which the activations of one network become the weights (the
2
Published as a conference paper at ICLR 2016 | 1511.06279#5 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 5 | We cast the problem of learning the input-dependent activation probabilities at each layer in the framework of Markov Decision Processes (MDP) (Puterman, 1994). We define a discrete time, continuous state and discrete action MDP (S,U, P(-|s,u),C). An action u ⬠{0,1}* in this model consists in the application of a mask over the units of a given layer. We define the state space of the MDP over the vector-valued activations s ⬠R* of all nodes at the previous layer. The cost C is the loss of the neural network architecture (in our case the negative log-likelihood). This MDP is single-step: an input is seen, an action is taken, a reward is observed and we are at the end state.
Similarly to the way dropout is described (Hinton et al., 2012), each node or block in a given layer has an associated Bernoulli distribution which determines its probability of being activated. We train a different policy for each layer l, and parameterize it (separately of the neural network) such that it is input-dependent. For every layer l of k units, we deï¬ne a policy as a k-dimensional Bernoulli distribution: | 1511.06297#5 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 5 | # 2 BACKGROUND: DEEP REINFORCEMENT LEARNING
A Markov Decision Process (MDP) is defined as a tuple (S, A, T, R, 7) where S is a set of states, A is a set of actions, T(sâ|s, a) is the transition probability of ending up in state sâ when executing action a in state s, R is the reward function mapping states in S to rewards in R, and Â¥ is a discount factor. An agentâs behaviour in an MDP is represented as a policy 7(a|s) which defines the probability of executing action a in state s. For a given policy, we can further define the Q-value function Q*(s,a) = E[v}49 y'r1|80 = 8, a0 = a] where H is the step when the game ends. The Q-function represents the expected future discounted reward when starting in a state s, executing a, and then following policy 7 until a terminating state is reached. There always exists at least one optimal state-action value function, Q*(s, a), such that Vs ⬠S,a ⬠A, Q*(s,a) = max, Qâ¢(s, a) (Sutton & Barto} |T998). The optimal Q-function can be rewritten as a Bellman equation: | 1511.06342#5 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 5 | Parametric models for generating images has been explored extensively (for example on MNIST digits or for texture synthesis (Portilla & Simoncelli, 2000)). However, generating natural images of the real world have had not much success until recently. A variational sampling approach to generating images (Kingma & Welling, 2013) has had some success, but the samples often suffer from being blurry. Another approach generates images using an iterative forward diffusion process (Sohl-Dickstein et al., 2015). Generative Adversarial Networks (Goodfellow et al., 2014) generated images suffering from being noisy and incomprehensible. A laplacian pyramid extension to this approach (Denton et al., 2015) showed higher quality images, but they still suffered from the objects looking wobbly because of noise introduced in chaining multiple models. A recurrent network approach (Gregor et al., 2015) and a deconvolution network approach (Dosovitskiy et al., 2014) have also recently had some success with generating natural images. However, they have not leveraged the generators for supervised tasks.
2.3 VISUALIZING THE INTERNALS OF CNNS | 1511.06434#5 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 6 | 2
Published as a conference paper at ICLR 2016
program) of a second network was mentioned in the Sigma-Pi units section of the inï¬uential PDP paper (Rumelhart et al., 1986). This idea appeared in (Sutskever & Hinton, 2009) in the context of learning higher order symbolic relations and in (Donnarumma et al., 2015) as the key ingredient of an architecture for prefrontal cognitive control. Schmidhuber (1992) proposed a related meta-learning idea, whereby one learns the parameters of a slowly changing network, which in turn generates context dependent weight changes for a second rapidly changing network. These approaches have only been demonstrated in very limited settings. In cognitive science, several theories of brain areas controlling other brain parts so as to carry out multiple tasks have been proposed; see for example Schneider & Chein (2003); Anderson (2010) and Donnarumma et al. (2012). | 1511.06279#6 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 6 | k x (us) = Il ot (lâo)t-), oi = [sigm(Zs + d)};, (1) i=1
where the Ïi denotes the participation probability, to be computed from the activations s of the layer below and the parameters θl = {Z(l), d(l)}. We denote the sigmoid function by sigm, the weight matrix by Z, and the bias vector by d. The output of a typical hidden layer h(x) that uses this policy is multiplied element-wise with the mask u sampled from the probabilities Ï, and becomes (h(x) â u). For clarity we did not superscript u, s and Ïi with l, but each layer has its own.
# 3 LEARNING SIGMOID-BERNOULLI POLICIES
We use REINFORCE (Williams, 1992) (detailed in appendix B) to learn the parameters ÎÏ = {θ1, ..., θL} of the sigmoid-Bernoulli policies. Since the nature of the observation space changes at each decision step, we learn L disjoint policies (one for each layer l of the deep network). As a consequence, the summation in the policy gradient disappears and becomes:
Vol = E{C(x)Vo, log a (u |s)} Q) | 1511.06297#6 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 6 | Q*(s,a) = E r+7-max Q*(sâ,aâ)|. (1) s'NT(-|s,a) aeA
An optimal policy can be constructed from the optimal Q-function by choosing, for a given state, the action with highest Q-value. Q-learning, a reinforcement learning algorithm, uses iterative backups of the Q-function to converge towards the optimal Q-function. Using a tabular representation of the Q-function, this is equivalent to setting Q("*))(s,a) = Es xT (-|s,a)[7 + *Maxaca Qâ¢(s!,aâ)] for the (n+1)th update step (Sutton & Barto}|I Because the state space in the ALE is too large to tractably store a tabular representation of the Q-function, the Deep Q-Network (DQN) approach uses a deep function approximator to represent the state-action value function (Mnih et al.||2015). To train a DQN on the (n+1)th step, we set the networkâs loss to
LOAD (9) = E 8,a,7,8'~M(-) 2 (r-+9 nay a(e'sa's0) â Qs, 4; gon) | » Q) acA | 1511.06342#6 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 6 | 2.3 VISUALIZING THE INTERNALS OF CNNS
One constant criticism of using neural networks has been that they are black-box methods, with little understanding of what the networks do in the form of a simple human-consumable algorithm. In the context of CNNs, Zeiler et. al. (Zeiler & Fergus, 2014) showed that by using deconvolutions and ï¬ltering the maximal activations, one can ï¬nd the approximate purpose of each convolution ï¬lter in the network. Similarly, using a gradient descent on the inputs lets us inspect the ideal image that activates certain subsets of ï¬lters (Mordvintsev et al.).
# 3 APPROACH AND MODEL ARCHITECTURE
Historical attempts to scale up GANs using CNNs to model images have been unsuccessful. This motivated the authors of LAPGAN (Denton et al., 2015) to develop an alternative approach to it- eratively upscale low resolution generated images which can be modeled more reliably. We also encountered difï¬culties attempting to scale GANs using CNN architectures commonly used in the supervised literature. However, after extensive model exploration we identiï¬ed a family of archi2
# Under review as a conference paper at ICLR 2016 | 1511.06434#6 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 7 | Related problems have been studied in the literature on hierarchical reinforcement learning (e.g., Dietterich (2000); Andre & Russell (2001); Sutton et al. (1999) and Schaul et al. (2015)), imitation and apprenticeship learning (e.g., Kolter et al. (2008) and Rothkopf & Ballard (2013)) and elicita- tion of options through human interaction (Subramanian et al., 2011). These ideas have held great promise, but have not enjoyed signiï¬cant impact. We believe the recurrent compositional neural representations proposed in this paper could help these approaches in the future, and in particular in overcoming feature engineering. | 1511.06279#7 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 7 | Vol = E{C(x)Vo, log a (u |s)} Q)
since θl = {Z(l), d(l)} only appears in the l-th decision stage and the gradient is zero otherwise.
Estimating (2) from samples requires propagating through many instances at a time, which we achieve through mini-batches of size mb . Under the mini-batch setting, s(l) becomes a matrix and Ï(· | ·) a vector of dimension mb . Taking the gradient of the parameters with respect to the
2
# Under review as a conference paper at ICLR 2016
log action probabilities can then be seen as forming a Jacobian. We can thus re-write the empirical average in matrix form:
my 1 Val & â So Oxi) Vo, loge (ul | 80) = âe" Va, loge (UM | SO 3 £5, DCC) Vo own (ul? [9\?) = Foe" Vo logx (U9 18) â)
where C(x;) is the total cost for input x; and mp is the number of examples in the mini-batch. The term c! denotes the row vector containing the total costs for every example in the mini-batch.
# 3.1 FAST VECTOR-JACOBIAN MULTIPLICATION | 1511.06297#7 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 7 | where M(-) is a uniform probability distribution over a replay memory, which is a set of the m previous (s,a,7,sâ) transition tuples seen during play, where m is the size of the memory. The replay memory is used to reduce correlations between adjacent states and is shown to have large effect on the stability of training the network in some games.
3 ACTOR-MIMIC
3.1 POLICY REGRESSION OBJECTIVE
Given a set of source games S1, ..., SN , our ï¬rst goal is to obtain a single multitask policy network that can play any source game at as near an expert level as possible. To train this multitask policy network, we use guidance from a set of expert DQN networks E1, ..., EN , where Ei is an expert specialized in source task Si. One possible deï¬nition of âguidanceâ would be to deï¬ne a squared loss that would match Q-values between the student network and the experts. As the range of the expert value functions could vary widely between games, we found it difï¬cult to directly distill knowledge from the expert value functions. The alternative we develop here is to instead match policies by ï¬rst transforming Q-values using a softmax. Using the softmax gives us outputs which
2
Published as a conference paper at ICLR 2016 | 1511.06342#7 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 7 | # Under review as a conference paper at ICLR 2016
tectures that resulted in stable training across a range of datasets and allowed for training higher resolution and deeper generative models.
Core to our approach is adopting and modifying three recently demonstrated changes to CNN archi- tectures.
The ï¬rst is the all convolutional net (Springenberg et al., 2014) which replaces deterministic spatial pooling functions (such as maxpooling) with strided convolutions, allowing the network to learn its own spatial downsampling. We use this approach in our generator, allowing it to learn its own spatial upsampling, and discriminator. | 1511.06434#7 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 8 | Several recent advancements have extended recurrent networks to solve problems beyond simple sequence prediction. Graves et al. (2014) developed a neural Turing machine capable of learning and executing simple programs such as repeat copying, simple priority sorting and associative recall. Vinyals et al. (2015) developed Pointer Networks that generalize the notion of encoder attention in order to provide the decoder a variable-sized output space depending on the input sequence length. This model was shown to be effective for combinatorial optimization problems such as the traveling salesman and Delaunay triangulation. While our proposed model is trained on execution traces in- stead of input and output pairs, in exchange for this richer supervision we beneï¬t from compositional program structure, improving data efï¬ciency on several problems. | 1511.06279#8 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 8 | # 3.1 FAST VECTOR-JACOBIAN MULTIPLICATION
While Eqn (3) suggests that the Jacobian might have to be formed explicitly, Pearlmutter (1994) showed that computing a differential derivative suffices to compute left or right vector-Jacobian (or Hessian) multiplication. The same trick has also recently been revived with the class of so- called âHessian-freeâ (Martens, 2010) methods for artificial neural networks. Using the notation of Pearlmutter (1994), we write Ro, {-} = c' Vo, for the differential operator.
1 ~ Orr) Vol ~ Re, {log n(UY |S )} (4)
3.2 SPARSITY AND VARIANCE REGULARIZATIONS
In order to favour activation policies with sparse actions, we add two penalty terms Lb and Le that depend on some target sparsity rate Ï . The ï¬rst term pushes the policy distribution Ï to activate each unit with probability Ï in expectation over the data. The second term pushes the policy distribution to have the desired sparsity of activations for each example. Thus, for a low Ï , a valid conï¬guration would be to learn a few high probability activations for some part of the data and low probability activations for the rest of the data, which results in having activation probability Ï in expectation. | 1511.06297#8 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 8 | 2
Published as a conference paper at ICLR 2016
are bounded in the unit interval and so the effects of the different scales of each expertâs Q-function are diminished, achieving higher stability during learning. Intuitively, we can view using the softmax from the perspective of forcing the student to focus more on mimicking the action chosen by the guiding expert at each state, where the exact values of the state are less important. We call this method âActor-Mimicâ as it is an actor, i.e. policy, that mimics the decisions of a set of experts. In particular, our technique ï¬rst transforms each expert DQN into a policy network by a Boltzmann distribution deï¬ned over the Q-value outputs,
e7 Qn, (8,0) Tp, (as) = a re Ce (3) aââ¬Ap,
Tp, (as) = a re Ce (3) aââ¬Ap, where 7 is a temperature parameter and Aj, is the action space used by the expert E;, Ap, C A. Given a state s from source task 5;, we then define the policy objective over the multitask network as the cross-entropy between the expert networkâs policy and the current multitask policy: | 1511.06342#8 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 8 | Second is the trend towards eliminating fully connected layers on top of convolutional features. The strongest example of this is global average pooling which has been utilized in state of the art image classiï¬cation models (Mordvintsev et al.). We found global average pooling increased model stability but hurt convergence speed. A middle ground of directly connecting the highest convolutional features to the input and output respectively of the generator and discriminator worked well. The ï¬rst layer of the GAN, which takes a uniform noise distribution Z as input, could be called fully connected as it is just a matrix multiplication, but the result is reshaped into a 4-dimensional tensor and used as the start of the convolution stack. For the discriminator, the last convolution layer is ï¬attened and then fed into a single sigmoid output. See Fig. 1 for a visualization of an example model architecture. | 1511.06434#8 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 9 | This work is also closely related to program induction. Most previous work on program induc- tion, i.e. inducing a program given example input and output pairs, has used genetic program- ming (Banzhaf et al., 1998) to evolve useful programs from candidate populations. Mou et al. (2014) process program symbols to learn max-margin program embeddings with the help of parse trees. Zaremba & Sutskever (2014) trained LSTM models to read in the text of simple programs Joulin & Mikolov (2015) aug- character-by-character and correctly predict the program output. mented a recurrent network with a pushdown stack, allowing for generalization to longer input sequences than seen during training for several algorithmic patterns.
Contemporary to this work, several papers have also studied program induction with variants of recurrent neural networks (Zaremba & Sutskever, 2015; Zaremba et al., 2015; Kaiser & Sutskever, 2015; Kurach et al., 2015; Neelakantan et al., 2015). While we share a similar motivation, our approach is distinct in that we explicitly incorporate compositional structure into the network using a program memory, allowing the model to learn new programs by combining sub-programs.
# 3 MODEL | 1511.06279#9 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 9 | n 1 Ly = > \IE{o3} â rMlo De = E{I(â 9) 23) ~ TIl2} (5) j 5
Since we are in a minibatch setting, these expectations can be approximated over the minibatch:
Roy me 1mm 4 Ly SO |â You) - Tle Le ® â SO MM(â Yo oy) - Te (6) 5 my My n j
We ï¬nally add a third term, Lv, in order to favour the aforementioned conï¬gurations, where units only have a high probability of activation for certain examples, and low for the rest. We aim to max- imize the variances of activations of each unit, across the data. This encourages unitsâ activations to be varied, and while similar in spirit to the Lb term, this term explicitly discourages learning a uniform distribution.
2
1 my 1 mp 2 Ly = â SO vari fois} & -S ⢠(«. _ (= Yu) (7) ij 6
3.3 ALGORITHM
We interleave the learning of the network parameters ÎN N and the learning of the policy parameters ÎÏ. We ï¬rst update the network and policy parameters to minimize the following regularized loss function via backpropagation (Rumelhart et al., 1988): | 1511.06297#9 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 9 | Li policy(θ) = ÏEi(a|s) log ÏAMN(a|s; θ), (4)
# aâAEi
where ÏAMN(a|s; θ) is the multitask Actor-Mimic Network (AMN) policy, parameterized by θ. In contrast to the Q-learning objective which recursively relies on itself as a target value, we now have a stable supervised training signal (the expert network output) to guide the multitask network.
To acquire training data, we can sample either the expert network or the AMN action outputs to generate the trajectories used in the loss. Empirically we have observed that sampling from the AMN while it is learning gives the best results. We later prove that in either case of sampling from the expert or AMN as it is learning, the AMN will converge to the expert policy using the policy regression loss, at least in the case when the AMN is a linear function approximator. We use an e-greedy policy no matter which network we sample actions from, which with probability ⬠picks a random action uniformly and with probability 1 â ⬠chooses an action from the network.
3.2 FEATURE REGRESSION OBJECTIVE | 1511.06342#9 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 9 | Third is Batch Normalization (Ioffe & Szegedy, 2015) which stabilizes learning by normalizing the input to each unit to have zero mean and unit variance. This helps deal with training problems that arise due to poor initialization and helps gradient ï¬ow in deeper models. This proved critical to get deep generators to begin learning, preventing the generator from collapsing all samples to a single point which is a common failure mode observed in GANs. Directly applying batchnorm to all layers however, resulted in sample oscillation and model instability. This was avoided by not applying batchnorm to the generator output layer and the discriminator input layer.
The ReLU activation (Nair & Hinton, 2010) is used in the generator with the exception of the output layer which uses the Tanh function. We observed that using a bounded activation allowed the model to learn more quickly to saturate and cover the color space of the training distribution. Within the discriminator we found the leaky rectiï¬ed activation (Maas et al., 2013) (Xu et al., 2015) to work well, especially for higher resolution modeling. This is in contrast to the original GAN paper, which used the maxout activation (Goodfellow et al., 2013).
# Architecture guidelines for stable Deep Convolutional GANs | 1511.06434#9 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 10 | # 3 MODEL
The NPI core is a long short-term memory (LSTM) network (Hochreiter & Schmidhuber, 1997) that acts as a router between programs conditioned on the current state observation and previous hidden unit states. At each time step, the core module can select another program to invoke using content-based addressing. It emits the probability of ending the current program with a single binary unit. If this probability is over threshold (we used 0.5), control is returned to the caller by popping the callerâs LSTM hidden units and program embedding off of a program call stack and resuming execution in this context.
The NPI may also optionally write arguments (ARG) that are passed by reference or value to the invoked sub-programs. For example, an argument could indicate a speciï¬c location in the input sequence (by reference), or it could specify a number to write down at a particular location in the sequence (by value). The subsequent state consists of these arguments and observations of the environment. The approach is illustrated in Figures 1 and 2.
It must be emphasized that there is a single inference core. That is, all the LSTM instantiations executing arbitrary programs share the same parameters. Different programs correspond to program embeddings, which are stored in a learnable persistent memory. The programs therefore have a more
3
Published as a conference paper at ICLR 2016 | 1511.06279#10 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 10 | L=âlog P(Y |X, Onn) + As(Lo + Le) + Av(Lv) + Ax2||Oww ||? + Ax2||Oxl| where A, can be understood as a trade-off parameter between prediction accuracy and parsimony of computation (obtained through sparse node activation), and A, as a trade-off parameter between a stochastic policy and a more input dependent saturated policy. We then minimize the cost function C with a REINFORCE-style approach to update the policy parameters (Williams, 1992):
C = â log P (Y | X, ÎN N ) As previously mentioned, we use minibatch stochastic gradient descent as well as minibatch policy gradient updates. A detailed algorithm is available in appendix A.
3
# Under review as a conference paper at ICLR 2016
3.4 BLOCK ACTIVATION POLICY | 1511.06297#10 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 10 | 3.2 FEATURE REGRESSION OBJECTIVE
We can obtain further guidance from the expert networks in the following way. Let hAMN(s) and hEi(s) be the hidden activations in the feature (pre-output) layer of the AMN and iâth expert net- work computed from the input state s, respectively. Note that the dimension of hAMN(s) does not necessarily need to be equal to hEi(s), and this is the case in some of our experiments. We deï¬ne a feature regression network fi(hAMN(s)) that, for a given state s, attempts to predict the features hEi(s) from hAMN(s). The architecture of the mapping fi can be deï¬ned arbitrarily, and fi can be trained using the following feature regression loss: | 1511.06342#10 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 10 | # Architecture guidelines for stable Deep Convolutional GANs
⢠Replace any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).
Use batchnorm in both the generator and the discriminator. ⢠Remove fully connected hidden layers for deeper architectures. ⢠Use ReLU activation in generator for all layers except for the output, which uses Tanh. ⢠Use LeakyReLU activation in the discriminator for all layers.
# 4 DETAILS OF ADVERSARIAL TRAINING
We trained DCGANs on three datasets, Large-scale Scene Understanding (LSUN) (Yu et al., 2015), Imagenet-1k and a newly assembled Faces dataset. Details on the usage of each of these datasets are given below. | 1511.06434#10 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 11 | 3
Published as a conference paper at ICLR 2016
succinct representation than neural programs encoded as the full set of weights in a neural network (Rumelhart et al., 1986; Graves et al., 2014).
The output of an NPI, conditioned on an input state and a program to run, is a sequence of actions in a given environment. In this work, we consider several environments: a 1-D array with read-only pointers and a swap action, a 2-D scratch pad with read-write pointers, and a CAD renderer with controllable elevation and azimuth movements. Note that the sequence of actions for a program is not ï¬xed, but dependent also on the input state.
3.1 INFERENCE | 1511.06279#11 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 11 | 3
# Under review as a conference paper at ICLR 2016
3.4 BLOCK ACTIVATION POLICY
To achieve computational gain, instead of activating single units in hidden layers, we activate con- tiguous (equally-sized) groups of units together (independently for each example in the minibatch), thus reducing the action space as well as the number of probabilities to compute and sample. As such, there are two potential speedups. First, the policy is much smaller and faster to compute. Second, it offers a computational advantage in the computation of the hidden layer themselves, since we are now performing a matrix multiplication of the following form:
((H â MH )W ) â MO
where MH and MO are binary mask matrices. MO is obtained for each layer from the sampling of the policy as described in eq. 1: each sampled action (0 or 1) is repeated so as to span the corresponding block. MH is simply the mask of the previous layer. MH and MO resemble this (here there are 3 blocks of size 2):
0 1 0 0 1 0 1 0 ... 1 1 0 1 0 0 1 0 0 1
This allows us to quickly perform matrix multiplication by only considering the non-zero output elements as well as the non-zero elements in H â MH .
# 4 EXPERIMENTS
4.1 MODEL IMPLEMENTATION | 1511.06297#11 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 11 | Liocaturenegression(9,97,) = ||filhamn(s; 0); 07,) â he, (s)II3 5 (5) where @ and @,, are the parameters of the AMN and i'â feature regression network, respectively. When training this objective, the error is fully back-propagated from the feature regression network output through the layers of the AMN. In this way, the feature regression objective provides pressure on the AMN to compute features that can predict an expertâs features. A justification for this objec- tive is that if we have a perfect regression from multitask to expert features, all the information in the expert features is contained in the multitask features. The use of the separate feature prediction network f; for each task enables the multitask network to have a different feature dimension than the experts as well as prevent issues with identifiability. Empirically we have found that the feature regression objectiveâs primary benefit is that it can increase the performance of transfer learning in some target tasks.
3.3 ACTOR-MIMIC OBJECTIVE
Combining both regression objectives, the Actor-Mimic objective is thus deï¬ned as F eatureRegression(θ, θfi), | 1511.06342#11 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 11 | No pre-processing was applied to training images besides scaling to the range of the tanh activation function [-1, 1]. All models were trained with mini-batch stochastic gradient descent (SGD) with a mini-batch size of 128. All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02. In the LeakyReLU, the slope of the leak was set to 0.2 in all models. While previous GAN work has used momentum to accelerate training, we used the Adam optimizer (Kingma & Ba, 2014) with tuned hyperparameters. We found the suggested learning rate of 0.001, to be too high, using 0.0002 instead. Additionally, we found leaving the momentum term β1 at the
3
# Under review as a conference paper at ICLR 2016
Project and reshape G@)
Figure 1: DCGAN generator used for LSUN scene modeling. A 100 dimensional uniform distribu- tion Z is projected to a small spatial extent convolutional representation with many feature maps. A series of four fractionally-strided convolutions (in some recent papers, these are wrongly called deconvolutions) then convert this high level representation into a 64 Ã 64 pixel image. Notably, no fully connected or pooling layers are used.
suggested value of 0.9 resulted in training oscillation and instability while reducing it to 0.5 helped stabilize training.
# 4.1 LSUN | 1511.06434#11 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 12 | Denote the environment observation at time t as et â E, and the current program arguments as at â A. The form of et can vary dramatically by environment; for example it could be a color image or an array of numbers. The program arguments at can also vary by environment, but in the experiments for this paper we always used a 3-tuple of integers (at(1), at(2), at(3)). Given the environment and arguments at time t, a ï¬xed-length state encoding st â RD is extracted by a domain-speciï¬c encoder fenc : E ÃA â RD. In section 4 we provide examples of several encoders. Note that a single NPI network can have multiple encoders for multiple environments, and encoders can potentially also be shared across tasks. We denote the current program embedding as pt â RP . The previous hidden unit and cell states are h(l) tâ1 â RM , l = 1, ..., L where L is the number of layers in the LSTM. The program and state vectors are then propagated forward through an LSTM mapping flstm as in (Sutskever et al., 2014). How to fuse pt and st within flstm is an | 1511.06279#12 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 12 | # 4 EXPERIMENTS
4.1 MODEL IMPLEMENTATION
The proposed model was implemented within Theano (Bergstra et al., 2010), a standard library for deep learning and neural networks. In addition to using optimizations offered by Theano, we also implemented specialized matrix multiplication code for the operation exposed in section 3.4. A straightforward and fairly naive CPU implementation of this operation yielded speedups of up to 5-10x, while an equally naive GPU implementation yielded speedups of up to 2-4x, both for sparsity rates of under 20% and acceptable matrix and block sizes.1
We otherwise use fairly standard methods for our neural network. The weight matrices are initialized using the heuristic of Glorot & Bengio (2010). We use a constant learning rate throughout minibatch SGD. We also use early stopping (Bishop, 2006) to avoid overï¬tting. We only use fully-connected layers with tanh activations (reLu activations offer similar performance).
4.2 MODEL EVALUATION | 1511.06297#12 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 12 | Combining both regression objectives, the Actor-Mimic objective is thus deï¬ned as F eatureRegression(θ, θfi),
policy(θ) + β â Li (6) where β is a scaling parameter which controls the relative weighting of the two objectives. Intu- itively, we can think of the policy regression objective as a teacher (expert network) telling a student (AMN) how they should act (mimic expertâs actions), while the feature regression objective is anal- ogous to a teacher telling a student why it should act that way (mimic expertâs thinking process).
3
Published as a conference paper at ICLR 2016
3.4 TRANSFERING KNOWLEDGE: ACTOR-MIMIC AS PRETRAINING | 1511.06342#12 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 12 | suggested value of 0.9 resulted in training oscillation and instability while reducing it to 0.5 helped stabilize training.
# 4.1 LSUN
As visual quality of samples from generative image models has improved, concerns of over-ï¬tting and memorization of training samples have risen. To demonstrate how our model scales with more data and higher resolution generation, we train a model on the LSUN bedrooms dataset containing a little over 3 million training examples. Recent analysis has shown that there is a direct link be- tween how fast models learn and their generalization performance (Hardt et al., 2015). We show samples from one epoch of training (Fig.2), mimicking online learning, in addition to samples after convergence (Fig.3), as an opportunity to demonstrate that our model is not producing high quality samples via simply overï¬tting/memorizing training examples. No data augmentation was applied to the images.
# 4.1.1 DEDUPLICATION | 1511.06434#12 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 13 | then propagated forward through an LSTM mapping flstm as in (Sutskever et al., 2014). How to fuse pt and st within flstm is an implementation detail, but in this work we concatenate and feed through a 2-layer MLP with rectiï¬ed linear (ReLU) hidden activation and linear decoder. From the top LSTM hidden state hL t , several decoders generate the outputs. The probability of ï¬nishing the program and returning to the caller 1 is computed by fend : RM â [0, 1]. The lookup key embedding used for retrieving the next program from memory is computed by fprog : RM â RK. Note that RK can be much smaller than RP because the key only need act as the identiï¬er of a program, while the program embedding must have enough capacity to conditionally generate a sequence of actions. The contents of the arguments to the next program to be called are generated by farg : RM â A. The feed-forward steps of program inference are summarized below: | 1511.06279#13 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 13 | 4.2 MODEL EVALUATION
We ï¬rst evaluate the performance of our model on the MNIST digit dataset. We use a single hidden layer of 16 blocks of 16 units (256 units total), with a target sparsity rate of Ï = 6.25% = 1/16, learning rates of 10â3 for the neural network and 5 à 10â5 for the policy, λv = λs = 200 and λL2 = 0.005. Under these conditions, a test error of around 2.3% was achieved. A normal neural network with the same number of hidden units achieves a test error of around 1.9%, while a normal neural network with a similar amount of computation (multiply-adds) being made (32 hidden units) achieves a test error of around 2.8%.
Looking at the activation of the policy (1c), we see that it tends towards what was hypothesized in section 3.2, i.e. where examples activate most units with low probability and some units with high probability. We can also observe that the policy is input-dependent in ï¬gures 1a and 1b, since we see different activation patterns for inputs of class â0â and inputs of class â1â. | 1511.06297#13 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 13 | 3
Published as a conference paper at ICLR 2016
3.4 TRANSFERING KNOWLEDGE: ACTOR-MIMIC AS PRETRAINING
Now that we have a method of training a network that is an expert at all source tasks, we can proceed to the task of transferring source task knowledge to a novel but related target task. To enable transfer to a new task, we ï¬rst remove the ï¬nal softmax layer of the AMN. We then use the weights of AMN as an instantiation for a DQN that will be trained on the new target task. The pretrained DQN is then trained using the same training procedure as the one used with a standard DQN. Multitask pretraining can be seen as initializing the DQN with a set of features that are effective at deï¬ning policies in related tasks. If the source and target tasks share similarities, it is probable that some of these pretrained features will also be effective at the target task (perhaps after slight ï¬ne-tuning). | 1511.06342#13 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 13 | # 4.1.1 DEDUPLICATION
To further decrease the likelihood of the generator memorizing input examples (Fig.2) we perform a simple image de-duplication process. We ï¬t a 3072-128-3072 de-noising dropout regularized RELU autoencoder on 32x32 downsampled center-crops of training examples. The resulting code layer activations are then binarized via thresholding the ReLU activation which has been shown to be an effective information preserving technique (Srivastava et al., 2014) and provides a convenient form of semantic-hashing, allowing for linear time de-duplication . Visual inspection of hash collisions showed high precision with an estimated false positive rate of less than 1 in 100. Additionally, the technique detected and removed approximately 275,000 near duplicates, suggesting a high recall.
4.2 FACES
We scraped images containing human faces from random web image queries of peoples names. The people names were acquired from dbpedia, with a criterion that they were born in the modern era. This dataset has 3M images from 10K people. We run an OpenCV face detector on these images, keeping the detections that are sufï¬ciently high resolution, which gives us approximately 350,000 face boxes. We use these face boxes for training. No data augmentation was applied to the images.
4 | 1511.06434#13 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 14 | st = fenc(et, at) (1) ht = flstm(st, pt, htâ1) (2) rt = fend(ht), kt = fprog(ht), at+1 = farg(ht) (3) where rt, kt and at+1 correspond to the end-of-program probability, program key embedding, and output arguments at time t, respectively. These yield input arguments at time t + 1. To simplify the notation, we have abstracted properties such as layers and cell memory in the sequence-to-sequence LSTM of equation (2); see (Sutskever et al., 2014) for details. The NPI representation is equipped with key-value memory structures M key â RN ÃK and M prog â RN ÃP storing program keys and program embeddings, respectively, where N is the current number of programs in memory. We can add more programs by adding rows to memory. | 1511.06279#14 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 14 | Since the computation performed in our model is sparse, one could hope that it achieves this perfor- mance with less computation time, yet we consistently observe that models that deal with MNIST are too small to allow our specialized (3.4) sparse implementation to make a substantial difference. We include this result to highlight conditions under which it is less desirable to use our model.
1Implementations used in this paper are available at http://github.com/bengioe/condnet/
4
# Under review as a conference paper at ICLR 2016 | 1511.06297#14 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 14 | 4 CONVERGENCE PROPERTIES OF ACTOR-MIMIC We further study the convergence properties of the proposed Actor-Mimic under a framework similar to (Perkins & Precup, 2002). The analysis mainly focuses on L2-regularized policy regression with- out feature regression. Without losing generality, the following analysis focuses on learning from a single game expert softmax policy ÏE. The analysis can be readily extended to consider multiple experts on multiple games by absorbing different games into the same state space. Let DÏ(s) be the stationary distribution of the Markov decision process under policy Ï over states s â S. The policy regression objective function can be rewritten using expectation under the stationary distribution of the Markov decision process:
1 eal san |(meCals) aavetalss6))] + AIO a s~ DAMN, e-greedy (.) | 1511.06342#14 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06279 | 15 | During training, the next program identiï¬er is provided to the model as ground-truth, so that its embedding can be retrieved from the corresponding row of M prog. At test time, we compute the âprogram IDâ by comparing the key embedding kt to each row of M key storing all program keys. Then the program embedding is retrieved from M prog as follows: iâ = arg max i=1..N
The next environmental state et+1 will be determined by the dynamics of the environment and can be affected by both the choice of program pt and the contents of the output arguments at, i.e.
et+1 â¼ fenv(et, pt, at) (5) The transition mapping fenv is domain-speciï¬c and will be discussed in Section 4. A description of the inference procedure is given in Algorithm 1.
1In our implementation, a program may ï¬rst call a subprogram before itself ï¬nishing. The only exception is the ACT program that signals a low-level action to the environment, e.g. moving a pointer one step left or writing a value. By convention ACT does not call any further sub-programs.
4
Published as a conference paper at ICLR 2016
Algorithm 1 Neural programming inference | 1511.06279#15 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06342 | 15 | 1 eal san |(meCals) aavetalss6))] + AIO a s~ DAMN, e-greedy (.)
where ((-) is the cross-entropy measure and ) is the coefficient of weight decay that is necessary in the following analysis of the policy regression. Under Actor-Mimic, the learning agent interacts with the environment by following an e-greedy strategy of some Q function. The mapping from a Q function to an ¢-greedy policy 7¢-greeay is denoted by an operator Iâ, where Te-greeay = I'(Q). To avoid confusion onwards, we use notation p(a|s; 0) for the softmax policies in the policy regression objective.
Assume each state in a Markov decision process is represented by a compact K-dimensional feature representation Ï(s) â RK. Consider a linear function approximator for Q values with parameter matrix θ â RKÃ|A|, ËQ(s, a; θ) = Ï(s)T θa, where θa is the ath column of θ. The corresponding softmax policy of the linear approximator is deï¬ned by p(a|s; θ) â exp{ ËQ(s, a; θ)}. | 1511.06342#15 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06279 | 16 | 4
Published as a conference paper at ICLR 2016
Algorithm 1 Neural programming inference
1: Inputs: Environment observation e, program id i, arguments a, stop threshold a 2: function RUN(i, a) 3: heoO,r-0,p + MP9 > Init LSTM and return probability. 4: while r < ado 5: 8 © fenc(e, a), h © fistm(s, p, h) > Feed-forward NPI one step. 6: r& fend lh), k= fprog(h), a2 â farg(h) 7: in © arg max( Mj)" k > Decide the next program to run. j=1.N 8: if i == ACT then e + fenv(e,p, a) > Update the environment based on ACT. 9: else RUN(i2, a2) > Run subprogram 72 with arguments a
Each task has a set of actions that affect the environment. For example, in addition there are LEFT and RIGHT actions that move a speciï¬ed pointer, and a WRITE action which writes a value at a speciï¬ed location. These actions are encapsulated into a general-purpose ACT program shared across tasks, and the concrete action to be taken is indicated by the NPI-generated arguments at. | 1511.06279#16 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06342 | 16 | 4.1 STOCHASTIC STATIONARY POLICY For any stationary policy Ïâ, the stationary point of the objective function Eq. (7) can be found by setting its gradient w.r.t. θ to zero. Let Pθ be a |S| à |A| matrix where its ith row jth column element is the softmax policy prediction p(aj|si; θ) from the linear approximator. Similarly, let Î E be a |S| à |A| matrix for the softmax policy prediction from the expert model. Additionally, let DÏ be a diagonal matrix whose entries are DÏ(s). A simple gradient following algorithm on the objective function Eq. (7) has the following expected update rule using a learning rate αt > 0 at the tth iteration:
AO, = âa4 | ®7 Dy (Po,_, â We) + AOâ-1]- (8)
Lemma 1. Under a fixed policy x* and a learning rate schedule that satisfies \~?-, a, = ~, 1 a? < 00, the parameters 0, updated by the stochastic gradient descent learning algorithm described above, asymptotically almost surely converge to a unique solution 0°. | 1511.06342#16 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 16 | Figure 3: Generated bedrooms after ï¬ve epochs of training. There appears to be evidence of visual under-ï¬tting via repeated noise textures across multiple samples such as the base boards of some of the beds.
4.3 IMAGENET-1K
We use Imagenet-1k (Deng et al., 2009) as a source of natural images for unsupervised training. We train on 32 Ã 32 min-resized center crops. No data augmentation was applied to the images.
5
# Under review as a conference paper at ICLR 2016
5 EMPIRICAL VALIDATION OF DCGANS CAPABILITIES
5.1 CLASSIFYING CIFAR-10 USING GANS AS A FEATURE EXTRACTOR
One common technique for evaluating the quality of unsupervised representation learning algo- rithms is to apply them as a feature extractor on supervised datasets and evaluate the performance of linear models ï¬tted on top of these features. | 1511.06434#16 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 17 | Note that the core LSTM module of our NPI representation is completely agnostic to the data modal- ity used to produce the state encoding. As long as the same ï¬xed-length embedding is extracted, the same module can in practice route between programs related to sorting arrays just as easily as between programs related to rotating 3D objects. In the experimental sections, we provide details of the modality-speciï¬c deep neural networks that we use to produce these ï¬xed-length state vectors.
3.2 TRAINING To train we use execution traces â¬;"â : {ez, iz, ax} and £2 : {iz41, ar41, 7}, t = 1,...T, where T is the sequence length. Program IDs i, and i,+1 are row-indices in M*°Y and M°*°S of the programs to run at time ¢ and t+ 1, respectively. We propose to directly maximize the probability of the correct execution trace output â¬Â°ââ conditioned on â¬'â?: @*
θâ = arg max log P (ξout|ξinp; θ) (6) θ (ξinp,ξout) | 1511.06279#17 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 17 | model condnet condnet condnet bdNN bdNN NN NN NN test error 0.511 0.514 0.497 0.629 0.590 0.560 0.546 0.497 Ï 1/24 1/16 1/16 0.17 0.2 - - - #blocks 24,24 16,32 10,10 10,10 10,10 64,64 128,128 480,480 block size 64 16 64 64 64 1 1 1 test time 6.8s(26.2s) 1.4s (8.2s) 2.0s(10.4s) 1.93s(10.3s) 2.8s(10.3s) 1.23s 2.31s 8.34s speedup 3.8Ã 5.7Ã 5.3Ã 5.3Ã 3.5Ã - - Figure 2: CIFAR-10, condnet: our approach, NN: Neural Network without the conditional activa- tions, bdNN, block dropout Neural Network using a uniform policy. âspeedupâ is how many times faster the forward pass is when using a specialized implementation (3.4). âtest timeâ is the time required to do a full pass over the test dataset using the implementation, on a CPU, running on a single core; in parenthesis is the time without the optimization. | 1511.06297#17 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 17 | When the policy Ïâ is ï¬xed, the objective function Eq. (7) is convex and is the same as a multinomial logistic regression problem with a bounded Lipschitz constant due to its compact input features. Hence there is a unique stationary point θâ such that âθâ = 0. The proof of Lemma 1 follows the stochastic approximation argument (Robbins & Monro, 1951).
4.2 STOCHASTIC ADAPTIVE POLICY Consider the following learning scheme to adapt the agentâs policy. The learning agent interacts with the environment and samples states by following a fixed e-greedy policy 7â. Given the samples
4
Published as a conference paper at ICLR 2016
BOXING 405 ATLANTIS © [âAMN | IâDQN |âDQN-Max |DQN-Mean BREAKOUT _. 104 CRAZY CLIMBER a 3 8 00r sz Or os- PONG SEAQUEST SPACE INVADERS 0008 0 o0sz 0 00s 000r oszt 0 50 100 0 50 100 50 100 0 50 100 | 1511.06342#17 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 17 | On the CIFAR-10 dataset, a very strong baseline performance has been demonstrated from a well tuned single layer feature extraction pipeline utilizing K-means as a feature learning algorithm. When using a very large amount of feature maps (4800) this technique achieves 80.6% accuracy. An unsupervised multi-layered extension of the base algorithm reaches 82.0% accuracy (Coates & Ng, 2011). To evaluate the quality of the representations learned by DCGANs for supervised tasks, we train on Imagenet-1k and then use the discriminatorâs convolutional features from all layers, maxpooling each layers representation to produce a 4 à 4 spatial grid. These features are then ï¬attened and concatenated to form a 28672 dimensional vector and a regularized linear L2-SVM classiï¬er is trained on top of them. This achieves 82.8% accuracy, out performing all K-means based approaches. Notably, the discriminator has many less feature maps (512 in the highest layer) compared to K-means based techniques, but does result in a larger total feature vector size due to the many layers of 4 à 4 spatial locations. The performance of DCGANs | 1511.06434#17 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 18 | θâ = arg max log P (ξout|ξinp; θ) (6) θ (ξinp,ξout)
where θ are the parameters of our model. Since the traces are variable in length depending on the input, we apply the chain rule to model the joint probability over ξout
T log P(Eoutl&inp: 9) = > log P(E EL? on 61"? 8) (7) t=1
Note that for many problems the input history ξinp is critical to deciding future actions 1 because the environment observation at the current time-step et alone does not contain enough in- formation. The hidden unit activations of the LSTM in NPI are capable of capturing these temporal dependencies. The single-step conditional probability in equation (7) can be factorized into three further conditional distributions, corresponding to predicting the next program, next arguments, and whether to halt execution: |ξinp 1
(8) where ht is the output of flstm at time t, carrying information from previous time steps. We train by gradient ascent on the likelihood in equation (7). | 1511.06279#18 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 18 | Next, we consider the performance of our model on the CIFAR-10 (Krizhevsky & Hinton, 2009) image dataset. A brief hyperparameter search was made, and a few of the best models are shown in ï¬gure 2. These results show that it is possible to achieve similar performance with our model (de- noted condnet) as with a normal neural network (denoted NN), yet using sensibly reduced computa- tion time. A few things are worth noting; we can set Ï to be lower than 1 over the number of blocks, since the model learns a policy that is actually not as sparse as Ï , mostly because REINFORCE pulls the policy towards higher probabilities on average. For example our best performing model has a target of 1/16 but learns policies that average an 18% sparsity rate (we used λv = λs = 20, except for the ï¬rst layer λv = 40, we used λL2 = 0.01, and the learning rates were 0.001 for the neural net, 10â5 and 5 à 10â4 for the ï¬rst and second policy layers respectively). The neural networks without conditional activations are trained with L2 regularization as well as regular unit-wise dropout. We also train | 1511.06297#18 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 18 | Figure 1: The Actor-Mimic and expert DQN training curves for 100 training epochs for each of the 8 games. A training epoch is 250,000 frames and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report AMN and expert DQN test reward for each testing epoch and the mean and max of DQN performance. The max is calculated over all testing epochs that the DQN experienced until convergence while the mean is calculated over the last ten epochs before the DQN training was stopped. In the testing epoch we use ¢ = 0.05 in the e-greedy policy. The y-axis is the average unscaled episode reward during a testing epoch. The AMN results are averaged over 2 separately trained networks. | 1511.06342#18 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 18 | K-means based techniques, but does result in a larger total feature vector size due to the many layers of 4 à 4 spatial locations. The performance of DCGANs is still less than that of Exemplar CNNs (Dosovitskiy et al., 2015), a technique which trains normal discriminative CNNs in an unsupervised fashion to differentiate between speciï¬cally chosen, aggressively augmented, exemplar samples from the source dataset. Further improvements could be made by ï¬netuning the discriminatorâs representations, but we leave this for future work. Additionally, since our DCGAN was never trained on CIFAR-10 this experiment also demonstrates the domain robustness of the learned features. | 1511.06434#18 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 19 | (8) where ht is the output of flstm at time t, carrying information from previous time steps. We train by gradient ascent on the likelihood in equation (7).
We used an adaptive curriculum in which training examples for each mini-batch are fetched with fre- quency proportional to the modelâs current prediction error for the corresponding program. Specif- ically, we set the sampling frequency using a softmax over average prediction error across all pro- grams, with conï¬gurable temperature. Every 1000 steps of training we re-estimated these prediction errors. Intuitively, this forces the model to focus on learning the program for which it currently per- forms worst in executing. We found that the adaptive curriculum immediately worked much better than our best-performing hand-designed curriculum, allowing a multi-task NPI to achieve compara- ble performance to single-task NPI on all tasks.
We also note that our program has a distinct memory advantage over basic LSTMs because all sub- programs can be trained in parallel. For programs whose execution length grows e.g. quadratically
5
Published as a conference paper at ICLR 2016
Figure 3: Illustration of the addition environment used in our experiments. | 1511.06279#19 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 19 | and second policy layers respectively). The neural networks without conditional activations are trained with L2 regularization as well as regular unit-wise dropout. We also train networks with the same architecture as our models, using blocks, but with a uniform policy (as in original dropout) instead of a learned conditional one. This model (denoted bdNN) does not perform as well as our model, showing that the dropout noise by itself is not sufï¬cient, and that learning a policy is required to fully take beneï¬t of this architecture. | 1511.06297#19 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 19 | and the expert prediction, the linear function approximator parameters are updated using Eq. to a unique stationary point 0â. The new parameters 6â are then used to establish a new e-greedy policy xâ = T(Qor) through the Iâ operator over the linear function Qg. The agent under the new policy m"â subsequently samples a new set of states and actions from the Markov decision process to update its parameters. The learning agent therefore generates a sequence of policies {7!, 77, 73, ...}. The proof for the following theorem is given in Appendi Theorem 1. Assume the Markov decision process is irreducible and aperiodic for any policy 7 induced by the V operator and is Lipschitz continuous with a constant c., then the sequence of policies and model parameters generated by the iterative algorithm above converges almost surely to a unique solution x* and 6*. | 1511.06342#19 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 19 | Table 1: CIFAR-10 classiï¬cation results using our pre-trained model. Our DCGAN is not pre- trained on CIFAR-10, but on Imagenet-1k, and the features are used to classify CIFAR-10 images.
Model 1 Layer K-means 3 Layer K-means Learned RF View Invariant K-means Exemplar CNN DCGAN (ours) + L2-SVM Accuracy Accuracy (400 per class) max # of features units 80.6% 82.0% 81.9% 84.3% 82.8% 63.7% (±0.7%) 70.7% (±0.7%) 72.6% (±0.7%) 77.4% (±0.2%) 73.8% (±0.4%) 4800 3200 6400 1024 512
5.2 CLASSIFYING SVHN DIGITS USING GANS AS A FEATURE EXTRACTOR | 1511.06434#19 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 20 | 5
Published as a conference paper at ICLR 2016
Figure 3: Illustration of the addition environment used in our experiments.
(a) Example scratch pad and pointers used for computing â96 + 125 = 221â. Carry step is being implemented. (b) Actual trace of addition program generated by our model on the problem shown to the left. Note that we substituted the ACT calls in the trace with more human-readable steps.
with the input sequence length, an LSTM will by highly constrained by device memory to train on short sequences. By exploiting compositionality, an effective curriculum can often be developed with sublinear-length subprograms, enabling our NPI model to train on order of magnitude larger sequences than the LSTM.
# 4 EXPERIMENTS
This section describes the environment and state encoder function for each task, and shows example outputs and prediction accuracy results. For all tasks, the core LSTM had two layers of size 256. We trained the NPI using the ADAM solver (Kingma & Ba, 2015) with base learning rate 0.0001, batch size 1, and decayed the learning rate by a factor of 0.95 every 10,000 steps.
# 4.1 TASK AND ENVIRONMENT DESCRIPTIONS | 1511.06279#20 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 20 | 5
# Under review as a conference paper at ICLR 2016
0.40 T T e * ° a NN 0.35 e 4 > ee 4 A be e@ = condnet: e 0.30 0.25 0.20 valid error (%) 0.15 0.10 0.05 time of validation (sec)
Figure 3: SVHN, each point is an experiment. The x axis is the time required to do a full pass over the valid dataset (log scale, lower is better). Note that we plot the full hyperparameter exploration results, which is why condnet results are so varied.
model condnet condnet condnet NN NN NN test error 0.183 0.139 0.073 0.116 0.100 0.091 Ï 1/11 1/25,1/7 1/22 - - - #blocks 13,8 27,7 25,22 288,928 800,736 1280,1056 block size 16 16 32 1 1 1 test time 1.5s(2.2s) 2.8s (4.3s) 10.2s(14.1s) 4.8s 10.7s 16.8s speedup 1.4à 1.6à 1.4à - - Figure 4: SVHN results (see ï¬g 2) | 1511.06297#20 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 20 | 4.3. PERFORMANCE GUARANTEE The convergence theorem implies the Actor-Mimic learning algorithm also belongs to the family of no-regret algorithms in the online learning framework, see{Ross et al.| for more details. Their theoretical analysis can be directly applied to Actor-Mimic and results in a performance guarantee bound on how well the Actor-Mimic model performs with respect to the guiding expert. Let Zi (s, a) be the t-step reward of executing 7 in the initial state s and then following policy aâ. The cost-to-go for a policy 7 after T-steps is defined as Jp(7) = âT Es~ pi.) [R(s, a)], where R(s, a) is the reward after executing action a in state s. Proposition 1. For the iterative algorithm described in Section (42), if the loss function in Eq. converges to ⬠with the solution Tamy and LE a aals, 1) = Zh awa(s, a) > uforallactionsa ⬠A andt ⬠{1,-++ ,T}, then the cost-to-go of Actor-Mimic Jr(mamn) grows linearly after executing T actions: Jp (mamn) < Jr (mE) + uTe/ log 2. | 1511.06342#20 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 20 | 5.2 CLASSIFYING SVHN DIGITS USING GANS AS A FEATURE EXTRACTOR
On the StreetView House Numbers dataset (SVHN)(Netzer et al., 2011), we use the features of the discriminator of a DCGAN for supervised purposes when labeled data is scarce. Following similar dataset preparation rules as in the CIFAR-10 experiments, we split off a validation set of 10,000 examples from the non-extra set and use it for all hyperparameter and model selection. 1000 uniformly class distributed training examples are randomly selected and used to train a regularized linear L2-SVM classiï¬er on top of the same feature extraction pipeline used for CIFAR-10. This achieves state of the art (for classiï¬cation using 1000 labels) at 22.48% test error, improving upon another modifcation of CNNs designed to leverage unlabled data (Zhao et al., 2015). Additionally, we validate that the CNN architecture used in DCGAN is not the key contributing factor of the modelâs performance by training a purely supervised CNN with the same architecture on the same data and optimizing this model via random search over 64 hyperparameter trials (Bergstra & Bengio, 2012). It achieves a signï¬cantly higher 28.87% validation error.
6 | 1511.06434#20 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 21 | # 4.1 TASK AND ENVIRONMENT DESCRIPTIONS
In this section we provide an overview of the tasks used to evaluate our model. Table 2 in the appendix provides a full listing of all the programs and subprograms learned by our model.
# ADDITION
The task in this environment is to read in the digits of two base-10 numbers and produce the digits of the answer. Our goal is to teach the model the standard (at least in the US) grade school algorithm of adding, in which one works from right to left applying single-digit add and carry operations.
In this environment, the network is endowed with a âscratch padâ with which to store intermediate computations; e.g. to record carries. There are four pointers; one for each of the two input numbers, one for the carry, and another to write the output. At each time step, a pointer can be moved left or right, or it can record a value to the pad. Figure 3a illustrates the environment of this model, and Figure 3b provides a real execution trace generated by our model. | 1511.06279#21 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 21 | Finally we tested our model on the Street View House Numbers (SVHN) (Netzer et al., 2011) dataset, which also yielded encouraging results (ï¬gure 3). As we restrain the capacity of the models (by increasing sparsity or decreasing number of units), condnets retain acceptable performance with low run times, while plain neural networks suffer highly (their performance dramatically decreases with lower run times). The best condnet model has a test error of 7.3%, and runs a validation epoch in 10s (14s without speed optimization), while the best standard neural network model has a test error of 9.1%, and runs in 16s. Note that the variance in the SVHN results (ï¬gure 3) is due to the mostly random hyperparameter exploration, where block size, number of blocks, Ï , λv, λs, as well of learning rates are randomly picked. The normal neural network results were obtained by varying the number of hidden units of a 2-hidden-layer model.
For all three datasets and all condnet models used, the required training time was higher, but still reasonable. On average experiments took 1.5 to 3 times longer (wall time).
# 4.3 EFFECTS OF REGULARIZATION | 1511.06297#21 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 21 | The above linear growth rate of the cost-to-go is achieved through sampling from AMN action output ÏAMN, while the cost grows quadratically if the algorithm only samples from the expert action output. Our empirical observations conï¬rm this theoretical prediction.
# 5 EXPERIMENTS
In the following experiments, we validate the Actor-Mimic method by demonstrating its effective- ness at both multitask and transfer learning in the Arcade Learning Environment (ALE). For our experiments, we use subsets of a collection of 20 Atari games. 19 games of this set were among the 29 games that the DQN method performed at a super-human level. We additionally chose 1 game, the game of Seaquest, on which the DQN had performed poorly when compared to a human expert. Details on the training procedure are described in Appendix B.
5.1 MULTITASK To ï¬rst evaluate the actor-mimic objective on multitask learning, we demonstrate the effectiveness of training an AMN over multiple games simultaneously. In this particular case, since our focus is
5
Published as a conference paper at ICLR 2016 | 1511.06342#21 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 21 | 6
# INVESTIGATING AND VISUALIZING THE INTERNALS OF THE NETWORKS
We investigate the trained generators and discriminators in a variety of ways. We do not do any kind of nearest neighbor search on the training set. Nearest neighbors in pixel or feature space are
6
# Under review as a conference paper at ICLR 2016
Table 2: SVHN classiï¬cation with 1000 labels
Model KNN TSVM M1+KNN M1+TSVM M1+M2 SWWAE without dropout SWWAE with dropout DCGAN (ours) + L2-SVM Supervised CNN with the same architecture error rate 77.93% 66.55% 65.63% 54.33% 36.02% 27.83% 23.56% 22.48% 28.87% (validation)
trivially fooled (Theis et al., 2015) by small image transforms. We also do not use log-likelihood metrics to quantitatively assess the model, as it is a poor (Theis et al., 2015) metric.
6.1 WALKING IN THE LATENT SPACE | 1511.06434#21 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 22 | For the state encoder fenc, the model is allowed a view of the scratch pad from the perspective of each of the four pointers. That is, the model sees the current values at pointer locations of the two inputs, the carry row and the output row, as 1-of-K encodings, where K is 10 because we are working in base 10. We also append the values of the input argument tuple at:
fenc(Q, i1, i2, i3, i4, at) = M LP ([Q(1, i1), Q(2, i2), Q(3, i3), Q(4, i4), at(1), at(2), at(3)]) (9) where Q â R4ÃN ÃK, and i1, ..., i4 are pointers, one per scratch pad row. The ï¬rst dimension of Q corresponds to scratch pad rows, N is the number of columns (digits) and K is the one-hot encoding dimension. To begin the ADD program, we set the initial arguments to a default value and initialize all pointers to be at the rightmost column. The only subprogram with non-default arguments is ACT, in which case the arguments indicate an action to be taken by a speciï¬ed pointer.
# SORTING | 1511.06279#22 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 22 | # 4.3 EFFECTS OF REGULARIZATION
The added regularization proposed in section 3.2 seems to play an important role in our ability to train the conditional model. When using only the prediction score, we observed that the algorithm tried to compensate by recruiting more units and saturating their participation probability, or even failed by dismissing very early what were probably considered bad units. In practice, the variance regularization term Lv only slightly affects the prediction accuracy and learned policies of models, but we have observed that it signiï¬cantly speeds up the training process, probably by encouraging policies to become less uniform earlier in the learning process. This can
6
# Under review as a conference paper at ICLR 2016
(a) (b)
Figure 5: CIFAR-10, (a) each pair of circle and triangle is an experiment made with a given lambda (x axis), resulting in a model with a certain error and running time (y axes). As λs increases the running time decreases, but so does performance. (b) The same model is being trained with different values of λv. Redder means lower, greener means higher.
be seen in ï¬gure 5b, where we train a model with different values of λv. When λv is increased, the ï¬rst few epochs have a much lower error rate. | 1511.06297#22 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 22 | 5
Published as a conference paper at ICLR 2016
Network DQN AMN 100% Ã AMN DQN Mean Max Mean Max Mean Max Atlantis Boxing Breakout Crazy Climber 57279 81.47 273.15 541000 88.02 377.96 165065 76.264 347.01 370.32 81.860 584196 288.2% 93.61% 127.0% 108.0% 93.00% 97.98% 96189 117593 57070 74342 59.33% 63.22% Enduro Pong Seaquest 457.60 19.581 4278.9 808.00 20.140 6200.5 499.3 15.275 1177.3 1466.0 18.780 686.77 109.1% 78.01% 27.51% 85.00% 93.25% 23.64% Space Invaders 1669.2 2109.7 1142.4 1349.0 68.44% 63.94% | 1511.06342#22 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 22 | 6.1 WALKING IN THE LATENT SPACE
The ï¬rst experiment we did was to understand the landscape of the latent space. Walking on the manifold that is learnt can usually tell us about signs of memorization (if there are sharp transitions) and about the way in which the space is hierarchically collapsed. If walking in this latent space results in semantic changes to the image generations (such as objects being added and removed), we can reason that the model has learned relevant and interesting representations. The results are shown in Fig.4.
6.2 VISUALIZING THE DISCRIMINATOR FEATURES
Previous work has demonstrated that supervised training of CNNs on large image datasets results in very powerful learned features (Zeiler & Fergus, 2014). Additionally, supervised CNNs trained on scene classiï¬cation learn object detectors (Oquab et al., 2014). We demonstrate that an unsupervised DCGAN trained on a large image dataset can also learn a hierarchy of features that are interesting. Using guided backpropagation as proposed by (Springenberg et al., 2014), we show in Fig.5 that the features learnt by the discriminator activate on typical parts of a bedroom, like beds and windows. For comparison, in the same ï¬gure, we give a baseline for randomly initialized features that are not activated on anything that is semantically relevant or interesting. | 1511.06434#22 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 23 | # SORTING
In this section we apply our model to a setting with potentially much longer execution traces: sorting an array of numbers using bubblesort. As in the case of addition we can use a scratch pad to store intermediate states of the array. We deï¬ne the encoder as follows:
fenc(Q, i1, i2, at) = M LP ([Q(1, i1), Q(1, i2), at(1), at(2), at(3)]) (10)
6
Published as a conference paper at ICLR 2016
Figure 4: Illustration of the sorting environment used in our experiments.
(a) Example scratch pad and pointers used for sorting. Several steps of the BUBBLE subprogram are shown. (b) Excerpt from the trace of the learned bubblesort program.
where Q â R1ÃN ÃK is the pad, N is the array length and K is the array entry embedding dimension. Figure 4 shows an example series of array states and an excerpt of an execution trace.
# CANONICALIZING 3D MODELS | 1511.06279#23 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 23 | It is possible to tune some hyperparameters to affect the point at which the trade-off between com- putation speed and performance lies, thus one could push the error downwards at the expense of also more computation time. This is suggested by ï¬gure 5a, which shows the effect of one such hyperparameter (λs) on both running times and performance for the CIFAR dataset. Here it seems that λ â¼ [300, 400] offers the best trade-off, yet other values could be selected, depending on the speciï¬c requirements of an application.
# 5 RELATED WORK
Ba & Frey (2013) proposed a learning algorithm called standout for computing an input-dependent dropout distribution at every node. As opposed to our layer-wise method, standout computes a one- shot dropout mask over the entire network, conditioned on the input to the network. Additionally, masks are unit-wise, while our approach uses masks that span blocks of units. Bengio et al. (2013) introduced Stochastic Times Smooth neurons as gaters for conditional computation within a deep neural network. STS neurons are highly non-linear and non-differentiable functions learned using estimators of the gradient obtained through REINFORCE. They allow a sparse binary gater to be computed as a function of the input, thus reducing computations in the then sparse activation of hidden layers. | 1511.06297#23 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 23 | Table 1: Actor-Mimic results on a set of eight Atari games. We compare the AMN performance to that of the expert DQNs trained separately on each game. The expert DQNs were trained until convergence and the AMN was trained for 100 training epochs, which is equivalent to 25 million input frames per source game. For the AMN, we report maximum test reward ever achieved in epochs 1-100 and mean test reward in epochs 91-100. For the DQN, we report maximum test reward ever achieved until convergence and mean test reward in the last 10 epochs of DQN training. Additionally, at the last row of the table we report the percentage ratio of the AMN reward to the expert DQN reward for every game for both mean and max rewards. These percentage ratios are plotted in Figure 6. The AMN results are averaged over 2 separately trained networks. | 1511.06342#23 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 23 | 6.3 MANIPULATING THE GENERATOR REPRESENTATION
6.3.1 FORGETTING TO DRAW CERTAIN OBJECTS
In addition to the representations learnt by a discriminator, there is the question of what representa- tions the generator learns. The quality of samples suggest that the generator learns speciï¬c object representations for major scene components such as beds, windows, lamps, doors, and miscellaneous furniture. In order to explore the form that these representations take, we conducted an experiment to attempt to remove windows from the generator completely.
On 150 samples, 52 window bounding boxes were drawn manually. On the second highest con- volution layer features, logistic regression was ï¬t to predict whether a feature activation was on a window (or not), by using the criterion that activations inside the drawn bounding boxes are posi- tives and random samples from the same images are negatives. Using this simple model, all feature maps with weights greater than zero ( 200 in total) were dropped from all spatial locations. Then, random new samples were generated with and without the feature map removal.
The generated images with and without the window dropout are shown in Fig.6, and interestingly, the network mostly forgets to draw windows in the bedrooms, replacing them with other objects.
7
# Under review as a conference paper at ICLR 2016 | 1511.06434#23 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 24 | # CANONICALIZING 3D MODELS
We also apply our model to a vision task with a very different perceptual environment - pixels. Given a rendering of a 3D car, we would like to learn a visual program that âcanonicalizesâ the model with respect to its pose. Whatever the starting position, the program should generate a trajectory of actions that delivers the camera to the target view, e.g. frontal pose at a 15⦠elevation. For training data, we used renderings of the 3D car CAD models from (Fidler et al., 2012).
This is a nontrivial problem because different starting positions will require quite different trajec- tories to reach the target. Further complicating the problem is the fact that the model will need to generalize to different car models than it saw during training.
We again use a scratch pad, but here it is a very simple read-only pad that only contains a target camera elevation and azimuth â i.e., the âcanonical poseâ. Since observations come in the form of image pixels, we use a convolutional neural network fCN N as the image encoder:
fenc(Q, x, i1, i2, at) = M LP ([Q(1, i1), Q(2, i2), fCN N (x), at(1), at(2), at(3)]) | 1511.06279#24 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 24 | Stollenga et al. (2014) recently proposed to learn a sequential decision process over the ï¬lters of a convolutional neural network (CNN). As in our work, a direct policy search method was chosen to ï¬nd the parameters of a control policy. Their problem formulation differs from ours mainly in the notion of decision âstageâ. In their model, an input is ï¬rst fed through a network, the activations are computed during forward propagation then they are served to the next decision stage. The goal of the policy is to select relevant ï¬lters from the previous stage so as to improve the decision accuracy on the current example. They also use a gradient-free evolutionary algorithm, in contrast to our gradient-based method.
The Deep Sequential Neural Network (DSNN) model of Denoyer & Gallinari (2014) is possibly closest to our approach. The control process is carried over the layers of the network and uses the output of the previous layer to compute actions. The REINFORCE algorithm is used to train the pol- icy with the reward/cost function being deï¬ned as the loss at the output in the base network. DSNN considers the general problem of choosing between between different type of mappings (weights) in
7
# Under review as a conference paper at ICLR 2016 | 1511.06297#24 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 24 | on multitask learning and not transfer learning, we disregard the feature regression objective and set β to 0. Figure 1 and Table 1 show the results of an AMN trained on 8 games simultaneously with the policy regression objective, compared to an expert DQN trained separately for each game. The AMN and every individual expert DQN in this case had the exact same network architecture. We can see that the AMN quickly reaches close-to-expert performance on 7 games out of 8, only taking around 20 epochs or 5 million training frames to settle to a stable behaviour. This is in comparison to the expert networks, which were trained for up to 50 million frames. | 1511.06342#24 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 24 | Figure 4: Top rows: Interpolation between a series of 9 random points in Z show that the space learned has smooth transitions, with every image in the space plausibly looking like a bedroom. In the 6th row, you see a room without a window slowly transforming into a room with a giant window. In the 10th row, you see what appears to be a TV slowly being transformed into a window.
# 6.3.2 VECTOR ARITHMETIC ON FACE SAMPLES
In the context of evaluating learned representations of words (Mikolov et al., 2013) demonstrated that simple arithmetic operations revealed rich linear structure in representation space. One canoni- cal example demonstrated that the vector(âKingâ) - vector(âManâ) + vector(âWomanâ) resulted in a vector whose nearest neighbor was the vector for Queen. We investigated whether similar structure emerges in the Z representation of our generators. We performed similar arithmetic on the Z vectors of sets of exemplar samples for visual concepts. Experiments working on only single samples per concept were unstable, but averaging the Z vector for three examplars showed consistent and stable generations that semantically obeyed the arithmetic. In addition to the object manipulation shown in (Fig. 7), we demonstrate that face pose is also modeled linearly in Z space (Fig. 8). | 1511.06434#24 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 25 | where x â RHÃW Ã3 is a car rendering at the current pose, Q â R2Ã1ÃK is the pad containing canonical azimuth and elevation, i1, i2 are the (ï¬xed at 1) pointer locations, and K is the one-hot encoding dimension of pose coordinates. We set K = 24 corresponding to 15⦠pose increments.
Note, critically, that our NPI model only has access to pixels of the rendering and the target pose, and is not provided the pose of query frames. We are also aware that one solution to this problem would be to train a pose classiï¬er network and then ï¬nd the shortest path to canonical pose via classical methods. That is also a sensible approach. However, our purpose here is to show that our method generalizes beyond the scratch pad domain to detailed images of 3D objects, and also to other environments with a single multi-task model.
# 4.2 SAMPLE COMPLEXITY AND GENERALIZATION
Both LSTMs and Neural Turing Machines can learn to perform sorting to a limited degree, although they have not been shown to generalize well to much longer arrays than were seen during training. However, we are interested not only in whether sorting can be accomplished, but whether a particular sorting algorithm (e.g. bubblesort) can be learned by the model, and how effectively in terms of sample complexity and generalization. | 1511.06279#25 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 25 | 7
# Under review as a conference paper at ICLR 2016
a composition of functions. However, they test their model on datasets in which different modes are proeminent, making it easy for a policy to distinguish between them.
Another point of comparison for our work are attention models (Mnih et al., 2014; Gregor et al., 2015; Xu et al., 2015). These models typically learn a policy, or a form of policy, that allows them to selectively attend to parts of their input sequentially, in a visual 2D environnement. Both attention and our approach aim to reduce computation times. While attention aims to perform dense compu- tations on subsets of the inputs, our approach aims to be more general, since the policy focuses on subsets of the whole computation (it is in a sense more distributed). It should also be possible to combine these approaches, since one acts on the input space and the other acts on the representa- tion space, altough the resulting policies would be much more complex, and not necessarily easily trainable.
# 6 CONCLUSION | 1511.06297#25 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 25 | One result that was observed during training is that the AMN often becomes more consistent in its behaviour than the expert DQN, with a noticeably lower reward variance in every game except Atlantis and Pong. Another surprising result is that the AMN achieves a signiï¬cantly higher mean reward in the game of Atlantis and relatively higher mean reward in the games of Breakout and Enduro. This is despite the fact that the AMN is not being optimized to improve reward over the expert but just replicate the expertâs behaviour. We also observed this increase in source task perfor- mance again when we later on increased the AMN model complexity for the transfer experiments (see Atlantis experiments in Appendix D). The AMN had the worst performance on the game of Seaquest, which was a game on which the expert DQN itself did not do very well. It is possible that a low quality expert policy has difï¬culty teaching the AMN to even replicate its own (poor) behaviour. We compare the performance of our AMN against a baseline of two different multitask DQN architectures in Appendix C. | 1511.06342#25 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 25 | These demonstrations suggest interesting applications can be developed using Z representations learned by our models. It has been previously demonstrated that conditional generative models can learn to convincingly model object attributes like scale, rotation, and position (Dosovitskiy et al., 2014). This is to our knowledge the ï¬rst demonstration of this occurring in purely unsupervised
8
# Under review as a conference paper at ICLR 2016
Random filters Trained filters
Figure 5: On the right, guided backpropagation visualizations of maximal axis-aligned responses for the ï¬rst 6 learned convolutional features from the last convolution layer in the discriminator. Notice a signiï¬cant minority of features respond to beds - the central object in the LSUN bedrooms dataset. On the left is a random ï¬lter baseline. Comparing to the previous responses there is little to no discrimination and random structure. | 1511.06434#25 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 26 | We compare the generalization ability of our model to a ï¬at sequence-to-sequence LSTM (Sutskever et al., 2014), using the same number of layers (2) and hidden units (256). Note that a ï¬at 2 version of NPI could also learn sorting of short arrays, but because bubblesort runs in O(N 2) for arrays of length N , the execution traces quickly become far too long to store the required number of LSTM states in memory. Our NPI architecture can train on much larger arrays by exploiting compositional structure; the memory requirements of any given subprogram can be restricted to O(N ).
2By ï¬at in this case, we mean non-compositional, not making use of subprograms, and only making calls to ACT in order to swap values and move pointers.
7
Published as a conference paper at ICLR 2016
Sorting per-sequence accuracy vs. # training examples *â2â_+_e a i TT # Training examples he Seq?Seq â® NPI
Sorting per-sequence accuracy vs sequence length r. + oe 2 ¢ 100 I I I Training l 50 sequence lengths l 25 ! i] ! 9 S20 a3 Sequence length a Seq?Seq â® NPI | 1511.06279#26 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 26 | # 6 CONCLUSION
This paper presents a method for tackling the problem of conditional computation in deep networks by using reinforcement learning. We propose a type of parameterized conditional computation pol- icy that maps the activations of a layer to a Bernoulli mask. The reinforcement signal accounts for the loss function of the network in its prediction task, while the policy network itself is regularized to account for the desire to have sparse computations. The REINFORCE algorithm is used to train policies to optimize this cost. Our experiments show that it is possible to train such models at the same levels of accuracy as their standard counterparts. Additionally, it seems possible to execute these similarly accurate models faster due to their sparsity. Furthermore, the model has a few simple parameters that allow to control the trade-off between accuracy and running time.
The use of REINFORCE could be replaced by a more efï¬cient policy search algorithm, and also, perhaps, one in which rewards (or costs) as described above are replaced by a more sequential variant. The more direct use of computation time as a cost may prove beneï¬cial. In general, we consider conditional computation to be an area in which reinforcement learning could be very useful, and deserves further study. | 1511.06297#26 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 26 | 5.2 TRANSFER We have found that although a small AMN can learn how to behave at a close-to-expert level on multiple source tasks, a larger AMN can more easily transfer knowledge to target tasks after be- ing trained on the source tasks. For the transfer experiments, we therefore signiï¬cantly increased the AMN model complexity relative to that of an expert. Using a larger network architecture also allowed us to scale up to playing 13 source games at once (see Appendix D for source task perfor- mance using the larger AMNs). We additionally found that using an AMN trained for too long on the source tasks hurt transfer, as it is likely overï¬tting. Therefore for the transfer experiments, we train the AMN on only 4 million frames for each of the source games.
To evaluate the Actor-Mimic objective on transfer learning, the previously described large AMNs will be used as a weight initialization for DQNs which are each trained on a different target task. We additionally independently evaluate the beneï¬t of the feature regression objective during transfer by having one AMN trained with only the policy regression objective (AMN-policy) and another trained using both feature and policy regression (AMN-feature). The results are then compared to the baseline of a DQN that was initialized with random weights. | 1511.06342#26 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 26 | Figure 6: Top row: un-modiï¬ed samples from model. Bottom row: the same samples generated with dropping out âwindowâ ï¬lters. Some windows are removed, others are transformed into objects with similar visual appearance such as doors and mirrors. Although visual quality decreased, overall scene composition stayed similar, suggesting the generator has done a good job disentangling scene representation from object representation. Extended experiments could be done to remove other objects from the image and modify the objects the generator draws.
models. Further exploring and developing the above mentioned vector arithmetic could dramat- ically reduce the amount of data needed for conditional generative modeling of complex image distributions.
# 7 CONCLUSION AND FUTURE WORK
We propose a more stable set of architectures for training generative adversarial networks and we give evidence that adversarial networks learn good representations of images for supervised learning and generative modeling. There are still some forms of model instability remaining - we noticed as models are trained longer they sometimes collapse a subset of ï¬lters to a single oscillating mode.
9
# Under review as a conference paper at ICLR 2016
uU __,â__ ââ ~ ; smiling neutral neutral smiling man woman woman man uââ_ } Sy man man woman . with glasses without glasses without glasses woman with glasses Results of doing the same arithmetic in pixel space -fi+=& -fitG@=& | 1511.06434#26 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 27 | Figure 5: Sample complexity. Test accuracy of sequence-to-sequence LSTM versus NPI on length-20 arrays of single-digit numbers. Note that NPI is able to mine and train on subprogram traces from each bubblesort example.
Figure 6: Strong vs. weak generalization. Test accuracy of sequence-to-sequence LSTM ver- sus NPI on varying-length arrays of single-digit numbers. Both models were trained on arrays of single-digit numbers up to length 20.
A strong indicator of whether a neural network has learned a program well is whether it can run the program on inputs of previously-unseen sizes. To evaluate this property, we train both the sequence- to-sequence LSTM and NPI to perform bubblesort on arrays of single-digit numbers from length 2 to length 20. Compared to ï¬xed-length inputs this raises the challenge level during training, but in exchange we can get a more ï¬exible and generalizable sorting program. | 1511.06279#27 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 27 | All the running times reported in the Experiments section are for a CPU, running on a single core. The motivation for this is to explore deployment of large neural networks on cheap, low-power, single core CPUs such as phones, while retaining high model capacity and expressiveness. While the results presented here show that our model for conditional computation can achieve speedups in this context, it is worth also investigating adaptation of these sparse computation models in multi- core/GPU architectures; this is the subject of ongoing work.
# ACKNOWLEDGEMENTS
The authors gratefully acknowledge ï¬nancial support for this work by the Samsung Advanced In- stitute of Technology (SAIT), the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Fonds de recherche du Qu´ebec - Nature et Technologies (FQRNT).
# REFERENCES
In Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K.Q. (eds.), Ad- vances in Neural Information Processing Systems 26, pp. 3084â3092. Curran Associates, Inc., 2013. URL http://papers.nips.cc/paper/5032-adaptive-dropout-for- training-deep-neural-networks.pdf. | 1511.06297#27 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 27 | The performance on a set of 7 target games is detailed in Table 2 (learning curves are plotted in Figure 7). We can see that the AMN pretraining provides a deï¬nite increase in learning speed for the 3 games of Breakout, Star Gunner and Video Pinball. The results in Breakout and Video Pinball demonstrate that the policy regression objective alone provides signiï¬cant positive transfer in some target tasks. The reason for this large positive transfer might be due to the source game Pong having very similar mechanics to both Video Pinball and Breakout, where one must use a paddle to prevent a ball from falling off screen. The machinery used to detect the ball in Pong would likely be useful in detecting the ball for these two target tasks, given some ï¬ne-tuning. Additionally, the feature regression objective causes a signiï¬cant speed-up in the game of Star Gunner compared to both the random initialization and the network trained solely with policy regression. Therefore even though the feature regression objective can slightly hurt transfer in some source games, it can provide large
6
Published as a conference paper at ICLR 2016 | 1511.06342#27 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 27 | Figure 7: Vector arithmetic for visual concepts. For each column, the Z vectors of samples are averaged. Arithmetic was then performed on the mean vectors creating a new vector Y . The center sample on the right hand side is produce by feeding Y as input to the generator. To demonstrate the interpolation capabilities of the generator, uniform noise sampled with scale +-0.25 was added to Y to produce the 8 other samples. Applying arithmetic in the input space (bottom two examples) results in noisy overlap due to misalignment.
Further work is needed to tackle this from of instability. We think that extending this framework
10
# Under review as a conference paper at ICLR 2016 | 1511.06434#27 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 28 | To handle variable-sized inputs, the state representation must have some information about input se- quence length and the number of steps taken so far. For example, the main BUBBLESORT program naturally needs to call its helper function BUBBLE a number of times dependent on the sequence length. We enable this in our model by adding a third pointer that acts as a counter; each time BUB- BLE is called the pointer is advanced by one step. The scratch pad environment also provides a bit indicating whether a pointer is at the start or end of a sequence, equivalent in purpose to end tokens used in a sequence-to-sequence model.
For each length, we provided 64 example bubblesort traces, for a total of 1,216 examples. Then, we evaluated whether the network can learn to sort arrays beyond length 20. We found that the trained model generalizes well, and is capable of sorting arrays up to size 60; see Figure 6. At 60 and beyond, we observed a failure mode in which sweeps of pointers across the array would take the wrong number of steps, suggesting that the limiting performance factor is related to counting. In stark contrast, when provided with the 1,216 examples, the sequence-to-sequence LSTMs fail to generalize beyond arrays of length 25 as shown in Figure 6. | 1511.06279#28 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 28 | Bengio, Y., Simard, P., and Frasconi, P. Learning long-term dependencies with gradient descent is difï¬cult. IEEE Transactions on Neural Nets, pp. 157â166, 1994.
Bengio, Yoshua, L´eonard, Nicholas, and Courville, Aaron. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Bergstra, James, Breuleux, Olivier, Bastien, Fr´ed´eric, Lamblin, Pascal, Pascanu, Razvan, Des- jardins, Guillaume, Turian, Joseph, Warde-Farley, David, and Bengio, Yoshua. Theano: a CPU
8
# Under review as a conference paper at ICLR 2016
and GPU math expression compiler. In Proceedings of the Python for Scientiï¬c Computing Con- ference (SciPy), June 2010. Oral Presentation.
Bishop, Christopher M. Pattern Recognition and Machine Learning (Information Science and Statis- tics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006. ISBN 0387310738. | 1511.06297#28 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 28 | Breakout 1mil | 2mil | 3 mil 4 mil 5 mil 6 mil 7 mil 8mil | 9mil | 10 mil Random 1.182 | 5.278 | 29.13 102.3 202.8 212.8 252.9 211.8 | 243.5 258.7 AMN-policy | 18.35 | 102.1 | 216.0 | 271.1 | 308.6 | 286.3 284.6 | 318.8 | 281.6 | 311.3 AMN-feature | 16.23 | 119.0 | 153.7 191.8 172.6 233.9 248.5 178.8 | 235.6 225.5 Gopher Tmil | 2mil | 3 mil 4 mil 5 mil 6 mil 7 mil 8mil | 8mil | 10 mil Random 294.0 | 578.9 1360 1540 1820 1133 633.0 1306 1758 1539 AMN-policy | 715.0 | 612.7 | 1362 924.8 1029 1186 1081 936.7 1251 1142 AMN-feature | 636.2 | 1110 | 918.8 1073 1028 810.1 1008 868.8 1054 982.4 Krull Tmil |] 2mil | 3 mil 4 mil 5 mil 6 mil 7 mil 8mil | 9 mil 10 mil Random 4302 | 6193 6576 | 1511.06342#28 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 28 | Figure 8: A âturnâ vector was created from four averaged samples of faces looking left vs looking right. By adding interpolations along this axis to random samples we were able to reliably transform their pose.
to other domains such as video (for frame prediction) and audio (pre-trained features for speech synthesis) should be very interesting. Further investigations into the properties of the learnt latent space would be interesting as well.
# ACKNOWLEDGMENTS
We are fortunate and thankful for all the advice and guidance we have received during this work, especially that of Ian Goodfellow, Tobias Springenberg, Arthur Szlam and Durk Kingma. Addition- ally weâd like to thank all of the folks at indico for providing support, resources, and conversations, especially the two other members of the indico research team, Dan Kuster and Nathan Lintz. Finally, weâd like to thank Nvidia for donating a Titan-X GPU used in this work.
# REFERENCES
Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. JMLR, 2012.
Coates, Adam and Ng, Andrew. Selecting receptive ï¬elds in deep networks. NIPS, 2011. | 1511.06434#28 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 29 | To study sample complexity further, we ï¬x the length of the arrays to 20 and vary the number of training examples. We see in Figure 5 that NPI starts learning with 2 examples and is able to sort almost perfectly with only 8 examples. The sequence-to-sequence model on the other hand requires 64 examples to start learning and only manages to sort well with over 250 examples.
Figure 7 shows several example canonicalization trajectories generated by our model, starting from the leftmost car. The image encoder was a convolutional network with three passes of stride-2 convolution and pooling, trained on renderings of size 128 à 128. The canonical target pose in this case is frontal with 15⦠elevation. At test time, from an initial rendering, NPI is able to canonicalize cars of varying appearance from multiple starting positions. Importantly, it can generalize to car appearances not encountered in the training set as shown in Figure 7.
# 4.3 LEARNING NEW PROGRAMS WITH A FIXED CORE | 1511.06279#29 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 29 | Davis, Andrew and Arel, Itamar. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461, 2013.
Deisenroth, Marc Peter, Neumann, Gerhard, and Peters, Jan. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1â142, 2013. doi: 10.1561/2300000021. URL http://dx.doi.org/10.1561/2300000021.
Denoyer, Ludovic and Gallinari, Patrick. Deep sequential neural network. CoRR, abs/1410.0510, 2014. URL http://arxiv.org/abs/1410.0510.
Glorot, Xavier and Bengio, Yoshua. Understanding the difï¬culty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artiï¬cial Intelligence and Statistics, AISTATS 2010, Chia Laguna Resort, Sardinia, Italy, May 13-15, 2010, pp. 249â256, 2010. URL http://www.jmlr.org/proceedings/papers/v9/glorot10a.html. | 1511.06297#29 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 29 | 982.4 Krull Tmil |] 2mil | 3 mil 4 mil 5 mil 6 mil 7 mil 8mil | 9 mil 10 mil Random 4302 | 6193 6576 7030 6754 5294 5949 5557 5366 6005 AMN-policy | 5827 | 7279 6838 6971 7277 7129 7854 8012 7244 7835 AMN-feature | 5033 | 7256 7008 7582 7665 8016 8133 6536 7832 6923 Road Runner | 1 mil | 2 mil | 3 mil 4 mil 5 mil 6 mil 7 mil 8mil | 9mil | 10 mil Random 327.5 | 988.1 | 16263 | 27183 | 26639 | 29488 33197 | 27683 | 25235 | 31647 AMN-policy | 1561 | 5119 | 19483 | 22132 | 23391 | 23813 34673 | 33476 | 31967 | 31416 AMN-feature | 1349 | 6659 | 18074 | 16858 | 18099 | 22985 27023 | 24149 | 28225 | 23342 Robotank Tmil | 2mil | 3 mil 4 mil 5 mil 6 mil 7 mil 8mil | 9mil | 10 mil Random 4.830 | 6.965 | 9.825 | 13.22 | 21.07 | 22.54 31.94 | 29.80 | 37.12 | 34.04 | 1511.06342#29 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.