doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1511.06434 | 29 | Coates, Adam and Ng, Andrew. Selecting receptive fields in deep networks. NIPS, 2011.
Coates, Adam and Ng, Andrew Y. Learning feature representations with k-means. In Neural Networks: Tricks of the Trade, pp. 561–580. Springer, 2012.
Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.
Denton, Emily, Chintala, Soumith, Szlam, Arthur, and Fergus, Rob. Deep generative image models using a laplacian pyramid of adversarial networks. arXiv preprint arXiv:1506.05751, 2015.
Dosovitskiy, Alexey, Springenberg, Jost Tobias, and Brox, Thomas. Learning to generate chairs with convolutional neural networks. arXiv preprint arXiv:1411.5928, 2014.
# Under review as a conference paper at ICLR 2016 | 1511.06434#29 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 30 | # 4.3 LEARNING NEW PROGRAMS WITH A FIXED CORE
One challenge for continual learning of neural-network-based agents is that training on new tasks and experiences can lead to degraded performance in old tasks. The learning of new tasks may require that the network weights change substantially, so care must be taken to avoid catastrophic forgetting (McCloskey & Cohen, 1989; O'Reilly et al., 2014). Using NPI, one solution is to fix the weights of the core routing module, and only make sparse updates to the program memory.
When adding a new program the core module's routing computation will be completely unaffected; all the learning for a new task occurs in program embedding space. Of course, the addition of new programs to the memory adds a new choice of program at each time step, and an old program could
Published as a conference paper at ICLR 2016 | 1511.06279#30 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
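The NPI summary in the row above describes three components: a task-agnostic recurrent core, a persistent key-value program memory, and domain-specific encoders. A toy sketch of the resulting interpreter loop follows; the program names, the stand-in core, and all dimensions are illustrative assumptions (the paper's core is an LSTM trained on execution traces):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Persistent key-value program memory: each program has a key (used for
# retrieval) and an embedding (fed back into the core). Names are illustrative.
names = ["ADD", "ADD1", "CARRY", "LSHIFT", "ACT"]
keys = {n: rng.normal(size=DIM) for n in names}
embeddings = {n: rng.normal(size=DIM) for n in names}

def core(h, state, prog):
    """Stand-in for the task-agnostic recurrent core (an LSTM in the paper).
    Returns a new hidden state, a query key, and an end-of-program probability."""
    h = np.tanh(h + state + prog)                 # toy recurrence
    p_end = 1.0 / (1.0 + np.exp(-h.sum() / DIM))  # toy termination head
    return h, h, p_end                            # query key := hidden state

def lookup(query):
    # Retrieve the program whose stored key best matches the predicted key.
    return max(names, key=lambda n: keys[n] @ query)

# Interpreter loop: run the current program until the core signals termination.
h, state = np.zeros(DIM), rng.normal(size=DIM)    # state: toy encoder output
prog, trace = embeddings["ADD"], []
for _ in range(3):
    h, query, p_end = core(h, state, prog)
    if p_end > 0.5:
        break
    nxt = lookup(query)                           # next subprogram to call
    trace.append(nxt)
    prog = embeddings[nxt]
print(trace)
```

A real NPI additionally emits arguments for each call and conditions `state` on a domain-specific encoder (scratch pad, image, etc.).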
1511.06297 | 30 | Gregor, Karol, Danihelka, Ivo, Graves, Alex, and Wierstra, Daan. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. arXiv preprint arXiv:1502.01852, 2015.
Hinton, Geoffrey E., Srivastava, Nitish, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012. URL http://arxiv.org/abs/1207.0580.
Hochreiter, S. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, T.U. München, 1991.
Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images, 2009. | 1511.06297#30 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-the-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
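The abstract above casts dropping blocks of units as a reinforcement learning problem trained by policy gradient. A toy sketch of that idea follows; the layer sizes, the single fixed input, and the cost-only reward are assumptions for demonstration (the paper's reward also accounts for prediction accuracy and adds a diversity regularizer):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy layer split into 4 blocks of hidden units. A per-block Bernoulli policy,
# conditioned on the input, decides which blocks to activate.
n_blocks, block_size, in_dim = 4, 8, 8
W = rng.normal(scale=0.5, size=(in_dim, n_blocks * block_size))
policy_W = np.zeros((in_dim, n_blocks))  # logits of the dropout policy

def forward(x):
    probs = sigmoid(x @ policy_W)
    mask = (rng.random(n_blocks) < probs).astype(float)  # sample block mask
    h = np.maximum(0.0, x @ W).reshape(n_blocks, block_size)
    return h * mask[:, None], mask, probs                # inactive blocks dropped

# REINFORCE: here the reward only penalizes the number of active blocks
# (computation cost), so the policy learns parsimonious activations.
x = np.ones(in_dim)
lr = 0.01
for _ in range(500):
    _, mask, probs = forward(x)
    reward = -mask.sum()
    # grad of log p(mask) w.r.t. the logits is (mask - probs)
    policy_W += lr * reward * np.outer(x, mask - probs)

print(sigmoid(x @ policy_W).mean())  # activation probabilities are driven down
```

With an accuracy term in the reward, the policy instead trades off speed against prediction quality rather than collapsing toward all-off.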
1511.06342 | 30 |
| | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil |
|---|---|---|---|---|---|---|---|---|---|---|
| Random | 4.830 | 6.965 | 9.825 | 13.22 | 21.07 | 22.54 | 31.94 | 29.80 | 37.12 | 34.04 |
| AMN-policy | 3.502 | 4.522 | 11.03 | 9.215 | 16.89 | 17.31 | 18.66 | 20.58 | 23.58 | 23.02 |
| AMN-feature | 3.550 | 6.162 | 13.94 | 17.58 | 17.57 | 20.72 | 20.13 | 21.13 | 26.14 | 23.29 |

| Star Gunner | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil |
|---|---|---|---|---|---|---|---|---|---|---|
| Random | 221.2 | 468.5 | 927.6 | 1084 | 1508 | 1626 | 3286 | 16017 | 36273 | 45322 |
| AMN-policy | 274.3 | 302.0 | 978.4 | 1667 | 4000 | 14655 | 31588 | 45667 | 38738 | 53642 |
| AMN-feature | 1405 | 4570 | 18111 | 23406 | 36070 | 46811 | 50667 | 49579 | 50440 | 56839 |

| Video Pinball | 1 mil | 2 mil | 3 mil | 4 mil | 5 mil | 6 mil | 7 mil | 8 mil | 9 mil | 10 mil |
|---|---|---|---|---|---|---|---|---|---|---|
| Random | 2323 | 8549 | 6780 | 5842 | 10383 | 11093 | 8468 | 5476 | 9964 | 11893 |
| AMN-policy | 2583 | | 1511.06342#30 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 30 |
Dosovitskiy, Alexey, Fischer, Philipp, Springenberg, Jost Tobias, Riedmiller, Martin, and Brox, Thomas. Discriminative unsupervised feature learning with exemplar convolutional neural networks. In Pattern Analysis and Machine Intelligence, IEEE Transactions on, volume 99. IEEE, 2015.
Efros, Alexei, Leung, Thomas K, et al. Texture synthesis by non-parametric sampling. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, volume 2, pp. 1033–1038. IEEE, 1999.
Freeman, William T, Jones, Thouis R, and Pasztor, Egon C. Example-based super-resolution. Computer Graphics and Applications, IEEE, 22(2):56–65, 2002.
Goodfellow, Ian J, Warde-Farley, David, Mirza, Mehdi, Courville, Aaron, and Bengio, Yoshua. Maxout networks. arXiv preprint arXiv:1302.4389, 2013. | 1511.06434#30 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 31 |
Figure 7: Example canonicalization of several different test set cars. The network is able to generate and execute the appropriate plan based on the starting car image. This NPI was trained on trajectories starting at azimuth (−75°...75°), elevation (0°...60°) in 15° increments. The training trajectories target azimuth 0° and elevation 15°, as in the generated traces above. | 1511.06279#31 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 31 | Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images, 2009.
Martens, James. Deep learning via hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pp. 735–742, 2010. URL http://www.icml2010.org/papers/458.pdf.
Mnih, Volodymyr, Heess, Nicolas, Graves, Alex, and Kavukcuoglu, Koray. Recurrent models of visual attention. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 27, pp. 2204–2212. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf. | 1511.06297#31 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-the-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06434 | 31 | Goodfellow, Ian J., Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron C., and Bengio, Yoshua. Generative adversarial nets. NIPS, 2014.
Gregor, Karol, Danihelka, Ivo, Graves, Alex, and Wierstra, Daan. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
Hardt, Moritz, Recht, Benjamin, and Singer, Yoram. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.
Hauberg, Søren, Freifeld, Oren, Larsen, Anders Boesen Lindbo, Fisher III, John W., and Hansen, Lars Kai. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation. arXiv preprint arXiv:1510.02795, 2015.
Hays, James and Efros, Alexei A. Scene completion using millions of photographs. ACM Transactions on Graphics (TOG), 26(3):4, 2007. | 1511.06434#31 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
mistakenly call a newly added program. To overcome this, when learning a new set of program vectors with a fixed core, in practice we train not only on example traces of the new program, but also traces of existing programs. Alternatively, a simpler approach is to prevent existing programs from calling subsequently added programs, allowing addition of new programs without ever looking back at training data for known programs. In either case, note that only the memory slots of the new programs are updated, and all other weights, including other program embeddings, are fixed.
Table 1 shows the result of adding a maximum-finding program MAX to a multitask NPI trained on addition, sorting and canonicalization. MAX first calls BUBBLESORT and then a new program RJMP, which moves pointers to the right of the sorted array, where the max element can be read. During training we froze all weights except for the two newly-added program embeddings. We find that NPI learns MAX perfectly without forgetting the other tasks. In particular, after training a single multi-task model as outlined in the following section, learning the MAX program with this fixed-core multi-task NPI results in no performance deterioration for all three tasks.
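The sparse-update scheme described above can be sketched as follows; a dict stands in for the program memory, and the hand-made gradients are illustrative assumptions (the real model updates embeddings by backpropagation through the frozen core):

```python
import numpy as np

class ProgramMemory:
    """Toy stand-in for NPI's program memory with a frozen core and old slots."""
    def __init__(self, embed_dim=8, seed=0):
        self.rng = np.random.default_rng(seed)
        self.embed_dim = embed_dim
        self.programs = {}   # program name -> embedding vector
        self.frozen = set()  # slots excluded from updates

    def add_program(self, name):
        self.programs[name] = self.rng.normal(size=self.embed_dim)

    def freeze_existing(self):
        self.frozen = set(self.programs)

    def sgd_step(self, grads, lr=0.1):
        # Sparse update: only the slots of newly added programs may move.
        for name, g in grads.items():
            if name not in self.frozen:
                self.programs[name] -= lr * g

mem = ProgramMemory()
for p in ["BUBBLESORT", "ADD", "GOTO"]:
    mem.add_program(p)
mem.freeze_existing()       # fix core weights and old program slots
mem.add_program("MAX")      # new task: only these two slots are trainable
mem.add_program("RJMP")

before = {p: mem.programs[p].copy() for p in ["BUBBLESORT", "ADD", "GOTO"]}
max_before = mem.programs["MAX"].copy()
mem.sgd_step({p: np.ones(mem.embed_dim) for p in mem.programs})

# Old programs are untouched; the new embeddings moved.
assert all(np.array_equal(mem.programs[p], before[p]) for p in before)
assert not np.array_equal(mem.programs["MAX"], max_before)
```

Because the core's routing weights never change, adding MAX this way cannot degrade the previously learned programs, matching the no-deterioration result reported here.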
4.4 SOLVING MULTIPLE TASKS WITH A SINGLE NETWORK | 1511.06279#32 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, volume 2011, pp. 5. Granada, Spain, 2011.
Pearlmutter, Barak A. Fast exact multiplication by the hessian. Neural Comput., 6(1):147–160, January 1994. ISSN 0899-7667. doi: 10.1162/neco.1994.6.1.147. URL http://dx.doi.org/10.1162/neco.1994.6.1.147.
Puterman, Martin L. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1st edition, 1994. ISBN 0471619779.
Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning representations by back-propagating errors. Cognitive modeling, 5, 1988. | 1511.06297#32 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
Table 2: Actor-Mimic transfer results for a set of 7 games. The 3 networks are trained as DQNs on the target task, with the only difference being the weight initialization. "Random" means random initial weights, "AMN-policy" means a weight initialization with an AMN trained using policy regression and "AMN-feature" means a weight initialization with an AMN trained using both policy and feature regression (see text for more details). We report the average test reward every 4 training epochs (equivalent to 1 million training frames), where the average is over 4 testing epochs that are evaluated immediately after each training epoch. For each game, we bold out the network results that have the highest average testing reward for that particular column.
benefits in others. The positive transfer in Breakout, Star Gunner and Video Pinball saves at least up to 5 million frames of training time in each game. Processing 5 million frames with the large model is equivalent to around 4 days of compute time on a NVIDIA GTX Titan. | 1511.06342#32 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
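The transfer experiment in the row above trains DQNs that differ only in their weight initialization: random, or copied from a pretrained AMN. A minimal sketch of that initialization step follows; the helper name and layer shapes are assumptions (the actual networks are convolutional DQNs, and a new game may need a different-sized action head):

```python
import numpy as np

def init_dqn_from_amn(amn_params, dqn_shapes, rng):
    """Initialize DQN weights, copying from a pretrained AMN where shapes match.

    Hypothetical helper: any layer with a mismatched shape (e.g. a
    game-specific output head) falls back to random initialization.
    """
    dqn_params = {}
    for name, shape in dqn_shapes.items():
        src = amn_params.get(name)
        if src is not None and src.shape == shape:
            dqn_params[name] = src.copy()              # transfer pretrained weights
        else:
            dqn_params[name] = rng.normal(scale=0.01, size=shape)
    return dqn_params

rng = np.random.default_rng(0)
amn = {"conv1": rng.normal(size=(16, 4, 8, 8)), "fc": rng.normal(size=(256, 18))}
shapes = {"conv1": (16, 4, 8, 8), "fc": (256, 4)}      # new game: 4 actions
dqn = init_dqn_from_amn(amn, shapes, rng)
assert np.array_equal(dqn["conv1"], amn["conv1"])      # transferred
assert dqn["fc"].shape == (256, 4)                     # re-initialized head
```

After this initialization, training proceeds as ordinary DQN on the target game; the table compares learning curves under the different starting points.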
1511.06434 | 32 | Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Kingma, Diederik P and Ba, Jimmy Lei. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Lee, Honglak, Grosse, Roger, Ranganath, Rajesh, and Ng, Andrew Y. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 609–616. ACM, 2009.
Loosli, Gaëlle, Canu, Stéphane, and Bottou, Léon. Training invariant support vector machines using selective sampling. In Bottou, Léon, Chapelle, Olivier, DeCoste, Dennis, and Weston, Jason (eds.), Large Scale Kernel Machines, pp. 301–320. MIT Press, Cambridge, MA., 2007. URL http://leon.bottou.org/papers/loosli-canu-bottou-2006.
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 33 | 4.4 SOLVING MULTIPLE TASKS WITH A SINGLE NETWORK
In this section we perform a controlled experiment to compare the performance of a multi-task NPI with several single-task NPI models. Table 1 shows the results for addition, sorting and canonicalizing 3D car models. We trained and evaluated on 10-digit numbers for addition, length-5 arrays for sorting, and up to four-step trajectories for canonicalization. As shown in Table 1, one multi-task NPI can learn all three programs (and necessarily the 21 subprograms) with comparable accuracy compared to each single-task NPI.
| Task | Single | Multi | + Max |
|---|---|---|---|
| Addition | 100.0 | 97.0 | 97.0 |
| Sorting | 100.0 | 100.0 | 100.0 |
| Canon. seen car | 89.5 | 91.4 | 91.4 |
| Canon. unseen | 88.7 | 89.9 | 89.9 |
| Maximum | - | - | 100.0 |

Table 1: Per-sequence % accuracy. "+ Max" indicates performance after addition of the additional max-finding subprograms to memory. "unseen" uses a test set with disjoint car models from the training set, while "seen car" uses the same car models but different trajectories.
# 5 CONCLUSION | 1511.06279#33 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 33 | Rumelhart, David E, Hinton, Geoffrey E, and Williams, Ronald J. Learning representations by back-propagating errors. Cognitive modeling, 5, 1988.
Silver, David, Lever, Guy, Heess, Nicolas, Degris, Thomas, Wierstra, Daan, and Riedmiller, Martin. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pp. 387–395, 2014. URL http://jmlr.org/proceedings/papers/v32/silver14.html.
# Under review as a conference paper at ICLR 2016
Stollenga, Marijn F, Masci, Jonathan, Gomez, Faustino, and Schmidhuber, Jürgen. Deep networks with internal selective attention through feedback connections. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 27, pp. 3545–3553. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5276-deep-networks-with-internal-selective-attention-through-feedback-connections.pdf. | 1511.06297#33 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
1511.06342 | 33 | On the other hand, for the games of Krull and Road Runner (although the multitask pretraining does help learning at the start) the effect is not very pronounced. When running Krull we observed that the policy learnt by any DQN, regardless of the initialization, was a sort of unexpected local maximum. In Krull, the objective is to move between a set of varied minigames and complete each one. One of the minigames, where the player must traverse a spiderweb, gives extremely high reward by simply jumping quickly in a mostly random fashion. The DQN kills itself on purpose in the initial minigame, runs to the high-reward spiderweb minigame, and then simply jumps in the corner of the spiderweb until it is terminated by the spider. Because it is relatively easy to get stuck in this local maximum, and very hard to get out of it (jumping in the minigame gives disproportionately high reward compared to the other minigames), transfer does not really help learning. | 1511.06342#33 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 33 | Maas, Andrew L, Hannun, Awni Y, and Ng, Andrew Y. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.
Mikolov, Tomas, Sutskever, Ilya, Chen, Kai, Corrado, Greg S, and Dean, Jeff. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119, 2013.
Mordvintsev, Alexander, Olah, Christopher, and Tyka, Mike. Inceptionism: Going deeper into neural networks. http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html. Accessed: 2015-06-17.
Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814, 2010.
# Under review as a conference paper at ICLR 2016 | 1511.06434#33 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 34 | # 5 CONCLUSION
We have shown that the NPI can learn programs in very dissimilar environments with different affordances. In the context of sorting we showed that NPI exhibits very strong generalization in comparison to sequence-to-sequence LSTMs. We also showed how a trained NPI with a fixed core can continue to learn new programs without forgetting already learned programs.
ACKNOWLEDGMENTS
We sincerely thank Arun Nair and Ed Grefenstette for helpful suggestions.
Published as a conference paper at ICLR 2016
# REFERENCES
Anderson, Michael L. Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33:245–266, 8 2010.
Andre, David and Russell, Stuart J. Programmable reinforcement learning agents. In Advances in Neural Information Processing Systems, pp. 1019–1025. 2001.
Banzhaf, Wolfgang, Nordin, Peter, Keller, Robert E, and Francone, Frank D. Genetic programming: An introduction, volume 1. Morgan Kaufmann San Francisco, 1998.
Dietterich, Thomas G. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227–303, 2000.
1511.06297 | 34 | Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992. ISSN 0885-6125. doi: 10.1007/BF00992696. URL http://dx.doi.org/10.1007/BF00992696.
Xu, Kelvin, Ba, Jimmy, Kiros, Ryan, Courville, Aaron, Salakhutdinov, Ruslan, Zemel, Richard, and Bengio, Yoshua. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015.
# A ALGORITHM
The forward pass in our model is done as described in Algorithm 1 below, both at train time and test time.
input: x
1   h0 ← x ;                                     // the input mask is ones
2   u0 ← 1 ;
3   for each hidden layer l ∈ 1, ..., L do
4       pl ← sigm(Z(l) hl−1 + d(l)) = πl(ul | sl = hl−1) ;
5       ul ∼ Ber(pl) ;                           // sample Bernoulli from probabilities pl
6       if blocksize > 1 then
7           extend ul by repeating each value blocksize times
1511.06342 | 34 | For the games of Gopher and Robotank, we can see that the multitask pretraining does not have any significant positive effect. In particular, multitask pretraining for Robotank even seems to slow down learning, providing an example of negative transfer. The task in Robotank is to control a tank turret in a 3D environment to destroy other tanks, so it's possible that this game is so significantly different from any source task (being the only first-person 3D game) that the multitask pretraining does not provide any useful prior knowledge.
# 6 RELATED WORK
The idea of using expert networks to guide a single mimic network has been studied in the context of supervised learning, where it is known as model compression. The goal of model compression is to reduce the computational complexity of a large model (or ensemble of large models) to a single smaller mimic network while maintaining as high an accuracy as possible. To obtain high accuracy, the mimic network is trained using rich output targets provided by the experts. These output targets are either the final layer logits (Ba & Caruana, 2014) or the high-temperature softmax outputs of the experts (Hinton et al., 2015). Our approach is most similar to the technique of (Hinton et al., 2015)
1511.06434 | 34 |
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, volume 2011, pp. 5. Granada, Spain, 2011.
Oquab, M., Bottou, L., Laptev, I., and Sivic, J. Learning and transferring mid-level image representations using convolutional neural networks. In CVPR, 2014.
Portilla, Javier and Simoncelli, Eero P. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49–70, 2000.
Rasmus, Antti, Valpola, Harri, Honkala, Mikko, Berglund, Mathias, and Raiko, Tapani. Semi-supervised learning with ladder network. arXiv preprint arXiv:1507.02672, 2015.
1511.06279 | 35 | Donnarumma, Francesco, Prevete, Roberto, and Trautteur, Giuseppe. Programming in the brain: A neural network theoretical framework. Connection Science, 24(2-3):71–90, 2012.
Donnarumma, Francesco, Prevete, Roberto, Chersi, Fabian, and Pezzulo, Giovanni. A programmer-interpreter neural network architecture for prefrontal cognitive control. International Journal of Neural Systems, 25(6):1550017, 2015.
Fidler, Sanja, Dickinson, Sven, and Urtasun, Raquel. 3D object detection and viewpoint estimation with a deformable 3D cuboid model. In Advances in neural information processing systems, 2012.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. In NIPS, 2015.
1511.06297 | 35 | 5       ul ∼ Ber(pl) ;                           // sample Bernoulli from probabilities pl
6       if blocksize > 1 then
7           extend ul by repeating each value blocksize times
8       end
        // this operation can be performed efficiently as described in section 3.4:
9       hl ← f(W(l)(hl−1 ⊙ ul−1) + b(l)) ⊙ ul
10  end
# Algorithm 1: Single-input forward pass
This algorithm can easily be extended to the minibatch setting by replacing vector operations by matrix operations. Note that in the case of classification, the last layer is a softmax layer and is not multiplied by a mask.
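As a hedged illustration of this forward pass, here is a minimal single-input NumPy sketch. The shapes, the ReLU choice for f, and the helper names (`cond_forward`, `sigm`) are ours for illustration; the per-layer policy parameters correspond to the Z(l), d(l) above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

def cond_forward(x, weights, policies, blocksize=2):
    """Single-input forward pass with Bernoulli block-dropout masks."""
    h, u = x, np.ones_like(x)          # h0 <- x, u0 <- 1 (input mask is ones)
    for (W, b), (Z, d) in zip(weights, policies):
        p = sigm(Z @ h + d)                        # per-block keep probabilities
        mask = rng.binomial(1, p).astype(float)    # u_l ~ Ber(p_l)
        u_next = np.repeat(mask, blocksize)        # extend mask over each block
        # h_l <- f(W (h_{l-1} * u_{l-1}) + b) * u_l, with f = ReLU here
        h = np.maximum(0.0, W @ (h * u) + b) * u_next
        u = u_next
    return h
```

Only the units whose block mask is 1 need to be computed, which is where the speedup described in section 3.4 comes from.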
# input: x
1   y ← forward(x) ;                             // given the output of the forward pass
2   c ← C(x) = − log P(y|x) ;
3   L ← c + λs(Lb + Le) + λv Lv + λL2 ||θNN||² + λL2 ||θπ||² ;   // as in sections 3.2 and 3.3
    // update the neural network weights:
4   θNN ← θNN − α ∇θNN L ;
    // update the policy weights:
5   for each hidden layer l ∈ 1, ..., L do
6       θl ← θl − α c ∇θl log pl − α ∇θl L ;     // REINFORCE, where pl is computed as in algorithm 1
7   end
1511.06342 | 35 |
which matches the high-temperature outputs of the mimic network with those of the expert network. In addition, we also tried an objective that provides expert guidance at the feature level instead of only at the output level. A similar idea was also explored in the model compression case (Romero et al., 2015), where a deep and thin mimic network used a larger expert network's intermediate features as guiding hints during training. In contrast to these model compression techniques, our method is not concerned with decreasing test-time computation but instead uses experts to provide otherwise unavailable supervision to a mimic network on several distinct tasks.
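The high-temperature softmax matching described above can be sketched as a cross-entropy between temperature-softened teacher and student distributions. The function names, the temperature value, and the logits below are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(z, tau=1.0):
    """Temperature-scaled softmax, numerically stabilized."""
    z = np.asarray(z, dtype=float) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, tau=2.0):
    """Cross-entropy between softened teacher and student outputs,
    in the spirit of the high-temperature matching of Hinton et al. (2015)."""
    t = softmax(teacher_logits, tau)
    s = softmax(student_logits, tau)
    return float(-np.sum(t * np.log(s + 1e-12)))
```

Raising tau flattens both distributions, so the student is pushed to match the relative probabilities the teacher assigns to non-argmax classes, not just its top prediction.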
1511.06434 | 35 | Sohl-Dickstein, Jascha, Weiss, Eric A, Maheswaranathan, Niru, and Ganguli, Surya. Deep unsupervised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585, 2015.
Springenberg, Jost Tobias, Dosovitskiy, Alexey, Brox, Thomas, and Riedmiller, Martin. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
Srivastava, Rupesh Kumar, Masci, Jonathan, Gomez, Faustino, and Schmidhuber, Jürgen. Understanding locally competitive networks. arXiv preprint arXiv:1410.1165, 2014.
Theis, L., van den Oord, A., and Bethge, M. A note on the evaluation of generative models. arXiv:1511.01844, Nov 2015. URL http://arxiv.org/abs/1511.01844.
1511.06279 | 36 | Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. In NIPS, 2015.
Kaiser, Łukasz and Sutskever, Ilya. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. 2015.
Kolter, Zico, Abbeel, Pieter, and Ng, Andrew Y. Hierarchical apprenticeship learning with application to quadruped locomotion. In Advances in Neural Information Processing Systems, pp. 769–776. 2008.
Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random-access machines. arXiv preprint arXiv:1511.06392, 2015.
McCloskey, Michael and Cohen, Neal J. Catastrophic interference in connectionist networks: The sequential learning problem. In The psychology of learning and motivation, volume 24, pp. 109–165. 1989.
1511.06297 | 36 | 6       θl ← θl − α c ∇θl log pl − α ∇θl L ;     // REINFORCE, where pl is computed as in algorithm 1
7   end
Algorithm 2: Single-input backward pass
Note that in line 4, some gradients are zero; for example, the gradient of the L2 regularisation of θπ with respect to θNN is zero. Similarly in line 5, the gradient of c with respect to θπ is zero, which is why we have to use REINFORCE to approximate a gradient in the direction that minimizes c.
This algorithm can be extended to the minibatch setting efficiently by replacing the gradient computations in line 7 with the use of the so-called R-op, as described in section 3.1, and other computations as is usually done in the minibatch setting with matrix operations.
# B REINFORCE
REINFORCE (Williams, 1992), also known as the likelihood-ratio method, is a policy search algorithm. It aims to use gradient methods to improve a given parameterized policy.
In reinforcement learning, a sequence of state-action-reward tuples is described as a trajectory τ. The objective function of a parameterized policy πθ for the cumulative return of a trajectory τ is described as:
J(θ) = E_{πθ} [ Σ_{t=1}^{T} r_t | s_0 ]
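As a hedged illustration of the likelihood-ratio trick for a single sigmoid-parameterized Bernoulli policy (the function name, shapes, and cost function below are ours, not the paper's): the gradient of the expected cost is estimated as E[ c(u) · ∇θ log πθ(u|x) ], where ∇θ log Ber(u; p) = (u − p) x.

```python
import numpy as np

rng = np.random.default_rng(1)

def reinforce_grad(theta, x, cost_fn, n_samples=4000):
    """Monte-Carlo estimate of grad_theta E[c(u)] via REINFORCE."""
    p = 1.0 / (1.0 + np.exp(-(theta @ x)))   # pi_theta(u = 1 | x)
    total = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.binomial(1, p)               # sample an action u ~ Ber(p)
        # score-function term: c(u) * grad_theta log Ber(u; p) = c(u) * (u - p) * x
        total += cost_fn(u) * (u - p) * x
    return total / n_samples
```

With c(u) = u the expected cost is p itself, so the estimate should approach the analytic gradient p(1 − p)x as the sample count grows.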
1511.06342 | 36 | Actor-Mimic can also be considered as part of the larger Imitation Learning class of methods, which use expert guidance to teach an agent how to act. One such method, called DAGGER (Ross et al., 2011), is similar to our approach in that it trains a policy to directly mimic an expertâs behaviour while sampling actions from the mimic agent. Actor-Mimic can be considered as an extension of this work to the multitask case. In addition, using a deep neural network to parameterize the policy provides us with several advantages over the more general Imitation Learning framework. First, we can exploit the automatic feature construction ability of deep networks to transfer knowledge to new tasks, as long as the raw data between tasks is in the same form, i.e. pixel data with the same dimen- sions. Second, we can deï¬ne objectives which take into account intermediate representations of the state and not just the policy outputs, for example the feature regression objective which provides a richer training signal to the mimic network than just samples of the expertâs action output. | 1511.06342#36 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
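The DAGGER-style mimicry objective described in the chunk above (train a student policy to match a frozen teacher's action distribution on states gathered while following the student) can be sketched as follows. The linear softmax policies, dimensions, and learning rate are illustrative assumptions, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative linear policies over 4-d state features and 3 actions:
# a frozen "expert" teacher and a "mimic" student trained by cross-entropy.
W_expert = rng.normal(size=(4, 3))
W_mimic = np.zeros((4, 3))

def mimic_loss_and_grad(W, states):
    """Cross-entropy between expert and mimic action distributions,
    averaged over states (in DAGGER style, collected while acting
    with the mimic)."""
    p_expert = softmax(states @ W_expert)
    p_mimic = softmax(states @ W)
    loss = -np.mean(np.sum(p_expert * np.log(p_mimic + 1e-12), axis=1))
    # Standard softmax cross-entropy gradient, pushed back to W.
    grad = states.T @ (p_mimic - p_expert) / len(states)
    return loss, grad

states = rng.normal(size=(256, 4))
loss_before, _ = mimic_loss_and_grad(W_mimic, states)
for _ in range(500):
    _, grad = mimic_loss_and_grad(W_mimic, states)
    W_mimic -= 0.5 * grad
loss_after, _ = mimic_loss_and_grad(W_mimic, states)
```

Because the student shares the teacher's parameterization here, gradient descent drives the cross-entropy down toward the teacher's own action entropy; in the multitask setting the same loss would be summed over several teachers.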
1511.06434 | 36 | Vincent, Pascal, Larochelle, Hugo, Lajoie, Isabelle, Bengio, Yoshua, and Manzagol, Pierre-Antoine. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371–3408, 2010.
Xu, Bing, Wang, Naiyan, Chen, Tianqi, and Li, Mu. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015.
Yu, Fisher, Zhang, Yinda, Song, Shuran, Seff, Ari, and Xiao, Jianxiong. Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
Zeiler, Matthew D and Fergus, Rob. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, pp. 818–833. Springer, 2014.
Zhao, Junbo, Mathieu, Michael, Goroshin, Ross, and Lecun, Yann. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.
Under review as a conference paper at ICLR 2016 | 1511.06434#36 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 37 | Mou, Lili, Li, Ge, Liu, Yuxuan, Peng, Hao, Jin, Zhi, Xu, Yan, and Zhang, Lu. Building program vector representations for deep learning. arXiv preprint arXiv:1409.3358, 2014.
Neelakantan, Arvind, Le, Quoc V, and Sutskever, Ilya. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.
O'Reilly, Randall C., Bhattacharyya, Rajan, Howard, Michael D., and Ketz, Nicholas. Complementary learning systems. Cognitive Science, 38(6):1229–1248, 2014.
Rothkopf, Constantin A. and Ballard, Dana H. Modular inverse reinforcement learning for visuomotor behavior. Biological Cybernetics, 107(4):477–490, 2013.
Rumelhart, D. E., Hinton, G. E., and McClelland, J. L. Parallel distributed processing: Explorations in the microstructure of cognition, vol. 1. chapter A General Framework for Parallel Distributed Processing, pp. 45–76. MIT Press, 1986.
Published as a conference paper at ICLR 2016 | 1511.06279#37 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 37 | $J(\theta) = \mathbb{E}_\tau^{\pi_\theta}\left\{ \sum_{t=1}^{T} r_t \,\middle|\, s_0 \right\}$
where s_0 is the initial state of the trajectory. Let R(τ) denote the return for trajectory τ. The gradient of the objective with respect to the parameters of the policy is:
$\nabla_\theta J(\theta) = \nabla_\theta \mathbb{E}_\tau^{\pi_\theta}\{R(\tau)\} = \nabla_\theta \int P\{\tau|\theta\}\, R(\tau)\, d\tau = \int \nabla_\theta \left[ P\{\tau|\theta\}\, R(\tau) \right] d\tau \qquad (8)$
Note that the interchange in (8) is only valid under some assumptions (see Silver et al. (2014)).
$\nabla_\theta J(\theta) = \int \nabla_\theta \left[ P\{\tau|\theta\}\, R(\tau) \right] d\tau = \int \left[ R(\tau)\, \nabla_\theta P\{\tau|\theta\} + \nabla_\theta R(\tau)\, P\{\tau|\theta\} \right] d\tau \qquad (9)$
$= \int \left[ R(\tau)\, \nabla_\theta \log P\{\tau|\theta\} + \nabla_\theta R(\tau) \right] P\{\tau|\theta\}\, d\tau = \mathbb{E}_\tau^{\pi_\theta}\left\{ R(\tau)\, \nabla_\theta \log P\{\tau|\theta\} + \nabla_\theta R(\tau) \right\} \qquad (10)$ | 1511.06297#37 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-the-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
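The score-function identity derived in (8)-(10) above can be checked numerically on a toy one-step problem where ∇θJ has a closed form. The Bernoulli policy and reward table below are hypothetical stand-ins, not the paper's setup:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One-step "trajectories": a Bernoulli policy pi(a=1) = sigmoid(theta)
# chooses between two fixed rewards, so J(theta) = sigmoid(theta) and
# the exact gradient is sigmoid'(theta) = p * (1 - p).
theta = 0.3
p = sigmoid(theta)
reward = {0: 0.0, 1: 1.0}

# Monte-Carlo score-function estimate of grad J:
#   E{ R(tau) * d/dtheta log pi(a) },  with d/dtheta log pi(a) = a - p.
n = 200_000
est = sum(reward[a] * (a - p)
          for a in (1 if random.random() < p else 0 for _ in range(n))) / n

exact = p * (1.0 - p)
```

With 200k samples the estimator lands within about 1e-3 of the closed form, which is the property the derivation above guarantees in expectation.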
1511.06342 | 37 | Recent work has explored combining expert-guided Imitation Learning and deep neural networks in the single-task case. Guo et al. (2014) use DAGGER with expert guidance provided by Monte-Carlo Tree Search (MCTS) policies to train a deep neural network that improves on the original DQN's performance. Some disadvantages of using MCTS experts as guidance are that they require both access to the (hidden) RAM state of the emulator as well as an environment model. Another related method is that of guided policy search (Levine & Koltun, 2013), which combines a regularized importance-sampled policy gradient with guiding trajectory samples generated using differential dynamic programming. The goal in that work was to learn continuous control policies which improved upon the basic policy gradient method, which is prone to poor local minima. | 1511.06342#37 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06279 | 38 | Schaul, Tom, Horgan, Daniel, Gregor, Karol, and Silver, David. Universal value function approximators. In International Conference on Machine Learning, 2015.
Schmidhuber, Jürgen. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.
Schneider, Walter and Chein, Jason M. Controlled and automatic processing: behavior, theory, and biological mechanisms. Cognitive Science, 27(3):525–559, 2003.
Subramanian, Kaushik, Isbell, Charles, and Thomaz, Andrea. Learning options through human interaction. In IJCAI Workshop on Agents Learning Interactively from Human Teachers, 2011.
Sutskever, Ilya and Hinton, Geoffrey E. Using matrices to model symbolic relationships. In Advances in Neural Information Processing Systems, pp. 1593–1600. 2009.
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 38 | The product rule of derivatives is used in (9), and the derivative of a log in (10). Since R(τ) does not depend on θ directly, the gradient ∇θR(τ) is zero. We end up with this gradient:
$\nabla_\theta J(\theta) = \mathbb{E}_\tau^{\pi_\theta}\left\{ R(\tau)\, \nabla_\theta \log P\{\tau|\theta\} \right\} \qquad (11)$
Without knowing the transition probabilities, we cannot compute the probability of our trajectories P{τ|θ}, or their gradient. Fortunately we are in an MDP setting, and we can make use of the Markov property of the trajectories to compute the gradient:
$\nabla_\theta \log P\{\tau|\theta\} = \nabla_\theta \log \left[ p(s_0) \prod_{t=1}^{T} P\{s_{t+1}|s_t, a_t\}\, \pi_\theta(a_t|s_t) \right]$
$= \nabla_\theta \log p(s_0) + \sum_{t=1}^{T} \left[ \nabla_\theta \log P\{s_{t+1}|s_t, a_t\} + \nabla_\theta \log \pi_\theta(a_t|s_t) \right] \qquad (12)$
$= \sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_t|s_t)$
In (12), p(s_0) does not depend on θ, so the gradient is zero. Similarly, P{s_{t+1}|s_t, a_t} does not depend on θ (not directly at least), so the gradient is also zero. We end up with the gradient of the log policy, which is easy to compute. | 1511.06297#38 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-the-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
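The reduction in (12) above — the dynamics terms log P{s_{t+1}|s_t, a_t} drop out of ∇θ log P{τ|θ} because they do not depend on θ — can be verified numerically on a toy MDP. The transition table and one-parameter logistic policy below are made up for illustration:

```python
import math

# Toy 2-state, 2-action MDP: fixed transition table P[(s, a)] -> distribution
# over next states, independent of the policy parameter theta.
P = {(0, 0): [0.9, 0.1], (0, 1): [0.2, 0.8],
     (1, 0): [0.5, 0.5], (1, 1): [0.3, 0.7]}

def pi(a, s, theta):
    p1 = 1.0 / (1.0 + math.exp(-theta * (s + 1)))  # pi(a=1|s), illustrative
    return p1 if a == 1 else 1.0 - p1

def log_ptraj(tau, theta):
    # log P{tau|theta} = log p(s0) + sum_t [log P{s_{t+1}|s_t,a_t} + log pi(a_t|s_t)]
    # (deterministic start state, so log p(s0) = 0)
    return sum(math.log(P[(s, a)][s2]) + math.log(pi(a, s, theta))
               for (s, a, s2) in tau)

def sum_grad_log_pi(tau, theta, eps=1e-6):
    # Right-hand side of (12): only the policy terms carry a theta-gradient.
    return sum((math.log(pi(a, s, theta + eps))
                - math.log(pi(a, s, theta - eps))) / (2 * eps)
               for (s, a, _) in tau)

tau = [(0, 1, 1), (1, 0, 0), (0, 0, 0)]   # (s_t, a_t, s_{t+1}) triples
theta = 0.4
g_full = (log_ptraj(tau, theta + 1e-6) - log_ptraj(tau, theta - 1e-6)) / 2e-6
g_pol = sum_grad_log_pi(tau, theta)
```

The full trajectory log-probability gradient and the policy-only sum agree, since the dynamics terms are constants in θ and cancel in the finite difference.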
1511.06342 | 38 | A wide variety of methods have also been studied in the context of RL transfer learning (see Taylor & Stone (2009) for a more comprehensive review). One related approach is to use a dual state representation with a set of task-specific and task-independent features known as "problem-space" and "agent-space" descriptors, respectively. For each source task, a task-specific value function is learnt on the problem-space descriptors and then these learnt value functions are transferred to a single value function over the agent-space descriptors. Because the agent-space value function is defined over features which maintain constant semantics across all tasks, this value function can be directly transferred to new tasks. Banerjee & Stone (2007) constructed agent-space features by first generating a fixed-depth game tree of the current state, classifying each future state in the tree as either {win, lose, draw, nonterminal} and then coalescing all states which have the same class or subtree. To transfer the source tasks value functions to agent-space, they use a simple weighted average of the source task value functions, where the weight is | 1511.06342#38 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 38 | We propose to apply standard classification metrics to a conditional version of our model, evaluating the conditional distributions learned. We trained a DCGAN on MNIST (splitting off a 10K validation set) as well as a permutation invariant GAN baseline and evaluated the models using a nearest neighbor classifier comparing real data to a set of generated conditional samples. We found that removing the scale and bias parameters from batchnorm produced better results for both models. We speculate that the noise introduced by batchnorm helps the generative models to better explore and generate from the underlying data distribution. The results are shown in Table 3 which compares our models with other techniques. The DCGAN model achieves the same test error as a nearest neighbor classifier fitted on the training dataset - suggesting the DCGAN model has done a superb job at modeling the conditional distributions of this dataset. At one million samples per class, the DCGAN model outperforms InfiMNIST (Loosli et al., 2007), a hand developed data augmentation pipeline which uses translations and elastic deformations of training examples. The DCGAN is competitive with a probabilistic generative data augmentation technique utilizing learned per class transformations (Hauberg et al., 2015) while being more general as it directly models the data instead of transformations of the data.
Table 3: Nearest neighbor classification results. | 1511.06434#38 | Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks | In recent years, supervised learning with convolutional networks (CNNs) has
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
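The evaluation protocol in the chunk above — fit a nearest-neighbour classifier on labeled conditional samples, then test it on held-out real data — reduces to the following sketch. The 2-D Gaussian "classes" are stand-ins for generator output and real MNIST digits, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def nn_accuracy(train_x, train_y, test_x, test_y):
    """1-nearest-neighbour accuracy: label each test point with the label
    of its closest 'training' point (here: class-conditional samples)."""
    d = ((test_x[:, None, :] - train_x[None, :, :]) ** 2).sum(-1)
    pred = train_y[d.argmin(axis=1)]
    return float((pred == test_y).mean())

# Stand-in data: two well-separated Gaussian "classes" play the role of
# conditional generator samples (the 1-NN training set) and real held-out data.
gen_x = np.concatenate([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
gen_y = np.array([0] * 50 + [1] * 50)
real_x = np.concatenate([rng.normal(0, 1, (20, 2)), rng.normal(6, 1, (20, 2))])
real_y = np.array([0] * 20 + [1] * 20)

acc = nn_accuracy(gen_x, gen_y, real_x, real_y)
```

If the generator has modeled the conditional distributions well, the samples behave like real training data and the 1-NN accuracy on real test points is high.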
1511.06279 | 39 | Sutton, Richard S., Precup, Doina, and Singh, Satinder. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181–211, 1999.
Vinyals, Oriol, Fortunato, Meire, and Jaitly, Navdeep. Pointer networks. Advances in Neural Information Processing Systems (NIPS), 2015.
Zaremba, Wojciech and Sutskever, Ilya. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
Zaremba, Wojciech and Sutskever, Ilya. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 2015.
Zaremba, Wojciech, Mikolov, Tomas, Joulin, Armand, and Fergus, Rob. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.
# 6 APPENDIX
6.1 LISTING OF LEARNED PROGRAMS
Below we list the programs learned by our model: | 1511.06279#39 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06297 | 39 | In our particular case, the trajectories only have a single step and the reward of the trajectory is the neural network cost C(x), thus the summation disappears and the gradient found in (2) is found by taking the log of the probability of our Bernoulli sample: $\nabla_{\theta_l} C(\mathbf{x}) = \mathbb{E}\left\{ C(\mathbf{x})\, \nabla_{\theta_l} \log \pi_{\theta_l}(\mathbf{u}|\mathbf{s}) \right\}$
$\nabla_{\theta_l} C(\mathbf{x}) = \mathbb{E}\left\{ C(\mathbf{x})\, \nabla_{\theta_l} \log \pi_{\theta_l}(\mathbf{u}|\mathbf{s}) \right\} = \mathbb{E}\left\{ C(\mathbf{x})\, \nabla_{\theta_l} \log \prod_{i=1}^{k} \sigma_i^{u_i} (1-\sigma_i)^{(1-u_i)} \right\} = \mathbb{E}\left\{ C(\mathbf{x})\, \nabla_{\theta_l} \sum_{i=1}^{k} \log\left[ \sigma_i u_i + (1-\sigma_i)(1-u_i) \right] \right\}$
| 1511.06297#39 | Conditional Computation in Neural Networks for faster models | Deep learning has become the state-of-the-art tool in many applications, but the
evaluation and training of deep models can be time-consuming and
computationally expensive. The conditional computation approach has been
proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It
operates by selectively activating only parts of the network at a time. In this
paper, we use reinforcement learning as a tool to optimize conditional
computation policies. More specifically, we cast the problem of learning
activation-dependent policies for dropping out blocks of units as a
reinforcement learning problem. We propose a learning scheme motivated by
computation speed, capturing the idea of wanting to have parsimonious
activations while maintaining prediction accuracy. We apply a policy gradient
algorithm for learning policies that optimize this loss function and propose a
regularization mechanism that encourages diversification of the dropout policy.
We present encouraging empirical results showing that this approach improves
the speed of computation without impacting the quality of the approximation. | http://arxiv.org/pdf/1511.06297 | Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup | cs.LG | ICLR 2016 submission, revised | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1502.01852"
},
{
"id": "1502.04623"
},
{
"id": "1502.03044"
}
] |
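For the factorized Bernoulli mask in the last equation above, the gradient of log π reduces to the familiar Bernoulli score: each unit contributes (u_i − σ_i) times the gradient of its pre-sigmoid input. The scalar parameterization σ_i = sigmoid(θ·s_i) below is an illustrative assumption used only to check that identity against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(2)

s = np.array([0.5, -1.2, 2.0])   # per-unit policy inputs (illustrative)
theta = 0.7                       # scalar policy parameter (illustrative)

def sigma(theta):
    return 1.0 / (1.0 + np.exp(-theta * s))   # per-unit keep probabilities

u = (rng.random(3) < sigma(theta)).astype(float)  # sampled Bernoulli mask

def log_pi(theta):
    # log pi(u|s) = sum_i log[ sigma_i*u_i + (1 - sigma_i)*(1 - u_i) ]
    sg = sigma(theta)
    return np.sum(np.log(sg * u + (1.0 - sg) * (1.0 - u)))

# Analytic score: d/dtheta log pi(u|s) = sum_i (u_i - sigma_i) * s_i
score = np.sum((u - sigma(theta)) * s)

# Finite-difference check of the same quantity.
eps = 1e-6
fd = (log_pi(theta + eps) - log_pi(theta - eps)) / (2 * eps)
```

Multiplying this score by the observed cost C(x) gives the single-sample policy gradient estimate used in the derivation.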
1511.06342 | 39 | or subtree. To transfer the source tasks value functions to agent-space, they use a simple weighted average of the source task value functions, where the weight is proportional to the number of times that a specific agent-space descriptor has been seen during play in that source task. In a related method, Konidaris & Barto (2006) transfer the value function to agent-space by using regression to predict every source tasks problem-space value function from the agent-space descriptors. A drawback of these methods is that the agent- and problem-space descriptors are either hand-engineered or generated from a perfect environment model, thus requiring a significant amount of domain knowledge. 7 DISCUSSION | 1511.06342#39 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06434 | 39 | Table 3: Nearest neighbor classification results.
| Model | Test Error @50K samples | Test Error @10M samples |
|---|---|---|
| AlignMNIST | - | 1.4% |
| InfiMNIST | - | 2.6% |
| Real Data | 3.1% | - |
| GAN | 6.28% | 5.65% |
| DCGAN (ours) | 2.98% | 1.48% |
Figure 9: Side-by-side illustration of (from left-to-right) the MNIST dataset, generations from a baseline GAN, and generations from our DCGAN .
Figure 10: More face generations from our Face DCGAN.
seen huge adoption in computer vision applications. Comparatively, unsupervised
learning with CNNs has received less attention. In this work we hope to help
bridge the gap between the success of CNNs for supervised learning and
unsupervised learning. We introduce a class of CNNs called deep convolutional
generative adversarial networks (DCGANs), that have certain architectural
constraints, and demonstrate that they are a strong candidate for unsupervised
learning. Training on various image datasets, we show convincing evidence that
our deep convolutional adversarial pair learns a hierarchy of representations
from object parts to scenes in both the generator and discriminator.
Additionally, we use the learned features for novel tasks - demonstrating their
applicability as general image representations. | http://arxiv.org/pdf/1511.06434 | Alec Radford, Luke Metz, Soumith Chintala | cs.LG, cs.CV | Under review as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160107 | [
{
"id": "1505.00853"
},
{
"id": "1502.03167"
},
{
"id": "1502.04623"
},
{
"id": "1506.02351"
},
{
"id": "1506.03365"
},
{
"id": "1509.01240"
},
{
"id": "1503.03585"
},
{
"id": "1511.01844"
},
{
"id": "1506.05751"
},
{
"id": "1507.02672"
},
{
"id": "1510.02795"
}
] |
1511.06279 | 40 |
| Program | Descriptions | Calls |
|---|---|---|
| ADD | Perform multi-digit addition | ADD1, LSHIFT |
| ADD1 | Perform single-digit addition | ACT, CARRY |
| CARRY | Mark a 1 in the carry row one unit left | ACT |
| LSHIFT | Shift a specified pointer one step left | ACT |
| RSHIFT | Shift a specified pointer one step right | ACT |
| ACT | Move a pointer or write to the scratch pad | - |
| BUBBLESORT | Perform bubble sort (ascending order) | |
| BUBBLE | Perform one sweep of pointers left to right | |
| RESET | Move both pointers all the way left | |
| BSTEP | Conditionally swap and advance pointers | |
| COMPSWAP | Conditionally swap two elements | |
| LSHIFT | Shift a specified pointer one step left | |
| RSHIFT | Shift a specified pointer one step right | |
| ACT | Swap two values at pointer locations or move a pointer | |
| GOTO | Change 3D car pose to match the target | |
| HGOTO | Move horizontally to the target angle | |
| LGOTO | Move left to match the target angle | |
| RGOTO | Move right to match the target angle | |
| VGOTO | Move vertically to the target elevation | |
| UGOTO | Move up to match the target elevation | |
| DGOTO | Move down to match the target elevation | |
| ACT | Move camera 15° up, down, left or right | |
| RJMP | Move all pointers to the rightmost position | |
| MAX | Find maximum element of an array | |

| 1511.06279#40 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06342 | 40 | In this paper we defined Actor-Mimic, a novel method for training a single deep policy network over a set of related source tasks. We have shown that a network trained using Actor-Mimic is capable of reaching expert performance on many games simultaneously, while having the same model complexity as a single expert. In addition, using Actor-Mimic as a multitask pretraining phase can significantly improve learning speed in a set of target tasks. This demonstrates that the features learnt over the source tasks can generalize to new target tasks, given a sufficient level of similarity between source and target tasks. A direction of future work is to develop methods that can enable a targeted knowledge transfer from source tasks by identifying related source tasks for the given target task. Using targeted knowledge transfer can potentially help in cases of negative transfer observed in our experiments.
Acknowledgments: This work was supported by Samsung and NSERC.
Published as a conference paper at ICLR 2016
# REFERENCES
Ba, Jimmy and Caruana, Rich. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, pp. 2654–2662, 2014.
Banerjee, Bikramjit and Stone, Peter. General game learning using knowledge transfer. In International Joint Conferences on Artificial Intelligence, pp. 672–677, 2007. | 1511.06342#40 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06342 | 41 | Bellemare, Marc G., Naddaf, Yavar, Veness, Joel, and Bowling, Michael. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
Bertsekas, Dimitri P. Dynamic programming and optimal control, volume 1. Athena Scientific, Belmont, MA, 1995.
Guo, Xiaoxiao, Singh, Satinder, Lee, Honglak, Lewis, Richard L, and Wang, Xiaoshi. Deep learning for real-time atari game play using offline monte-carlo tree search planning. In Advances in Neural Information Processing Systems 27, pp. 3338–3346, 2014.
Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Konidaris, George and Barto, Andrew G. Autonomous shaping: Knowledge transfer in reinforcement learning. In Proceedings of the 23rd international conference on Machine learning, pp. 489–496, 2006. | 1511.06342#41 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06279 | 42 | Table 2: Programs learned for addition, sorting and 3D car canonicalization. Note that the ACT program has a different effect depending on the environment and on the passed-in arguments.
6.2 GENERATED EXECUTION TRACE OF BUBBLESORT. Figure 8 shows the sequence of program calls for BUBBLESORT; pointers 1 and 2 are used to implement the bubble operation. Figure 8: Generated execution trace from our trained NPI sorting the array [9,2,5].
# BUBBLESORT | 1511.06279#42 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06342 | 42 | Levine, Sergey and Koltun, Vladlen. Guided policy search. In Proceedings of the 30th international conference on Machine Learning, 2013.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. CoRR, abs/1504.00702, 2015.
Lillicrap, Timothy P., Hunt, Jonathan J., Pritzel, Alexander, Heess, Nicholas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A., Veness, Joel, Bellemare, Marc G., Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K., Ostrovski, Georg, Petersen, Stig, Beattie, Charles, Sadik, Amir, Antonoglou, Ioannis, King, Helen, Kumaran, Dharshan, Wierstra, Daan, Legg, Shane, and Hassabis, Demis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. | 1511.06342#42 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06279 | 43 | # BUBBLESORT
[Figure 8 execution trace, rendered as three parallel columns in the original (one per BUBBLE sweep); each sweep issues: BUBBLE, PTR 2 RIGHT, BSTEP, COMPSWAP, SWAP 1 2 (when a swap is needed), RSHIFT, PTR 1 RIGHT, PTR 2 RIGHT, BSTEP, COMPSWAP, RSHIFT, PTR 1 RIGHT, PTR 2 RIGHT, RESET, LSHIFT, PTR 1 LEFT, PTR 2 LEFT, LSHIFT, PTR 1 LEFT, PTR 2 LEFT, PTR 3 RIGHT] | 1511.06279#43 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06342 | 43 | Perkins, Theodore J and Precup, Doina. A convergent form of approximate policy iteration. In Advances in Neural Information Processing Systems, pp. 1595–1602, 2002.
Robbins, Herbert and Monro, Sutton. A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400–407, 1951.
Romero, Adriana, Ballas, Nicolas, Kahou, Samira Ebrahimi, Chassang, Antoine, Gatta, Carlo, and Bengio, Yoshua. Fitnets: Hints for thin deep nets. In International Conference on Learning Representations, 2015.
Ross, Stephane, Gordon, Geoffrey, and Bagnell, Andrew. A reduction of imitation learning and structured prediction to no-regret online learning. Journal of Machine Learning Research, 15:627–635, 2011.
Seneta, E. Sensitivity analysis, ergodicity coefficients, and rank-one updates for finite markov chains. Numerical solution of Markov chains, 8:121–129, 1991.
Sutton, Richard S. and Barto, Andrew G. Reinforcement learning: An introduction. MIT Press, Cambridge, 1998.
Taylor, Matthew E and Stone, Peter. Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633–1685, 2009.
| 1511.06342#43 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06279 | 44 | plement the 'bubble' operation involving the comparison and swapping of adjacent array elements. The third pointer (referred to in the trace as 'PTR 3') is used to count the number of calls to BUBBLE. After every call to RESET the swapping pointers are moved to the beginning of the array and the counting pointer is advanced by 1. When it has reached the end of the scratch pad, the model learns to halt execution of BUBBLESORT.
6.3 ADDITIONAL EXPERIMENT ON ADDITION GENERALIZATION
Based on reviewer feedback, we conducted an additional comparison of NPI and sequence-to-sequence models for the addition task, to evaluate their generalization ability. We implemented addition in a sequence-to-sequence model, training it to model sequences of the following form, e.g. for '90 + 160 = 250' we represent the sequence as:
90X160X250 | 1511.06279#44 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
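The pointer-based BUBBLESORT decomposition described in the chunk above (swap pointers plus a counting pointer that triggers halting) can be sketched in plain Python. This is an illustrative re-implementation that borrows the program names from the execution trace as comments; it is not the NPI model itself.

```python
def bubblesort(pad):
    """Sort `pad` in place using two swap pointers and a counting pointer."""
    p1, p2, p3 = 0, 1, 0          # PTR 1, PTR 2, PTR 3 (sweep counter)
    while p3 < len(pad):          # halt when the counting pointer reaches the end
        # BUBBLE: one sweep of the swap pointers left to right
        while p2 < len(pad):
            # COMPSWAP: conditionally swap the two pointed-at elements
            if pad[p1] > pad[p2]:
                pad[p1], pad[p2] = pad[p2], pad[p1]   # SWAP 1 2
            p1 += 1               # PTR 1 RIGHT
            p2 += 1               # PTR 2 RIGHT
        # RESET: move both swap pointers all the way left, advance the counter
        p1, p2 = 0, 1
        p3 += 1                   # PTR 3 RIGHT
    return pad

print(bubblesort([9, 2, 5]))      # the array sorted in Figure 8
```

Each outer iteration mirrors one column of the Figure 8 trace: a BUBBLE sweep followed by a RESET, with the third pointer counting sweeps so the loop can stop.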
1511.06342 | 44 | Taylor, Matthew E and Stone, Peter. Transfer learning for reinforcement learning domains: A survey. The Journal of Machine Learning Research, 10:1633–1685, 2009.
# APPENDIX A PROOF OF THEOREM 1
Lemma 2. For any two policies $\pi^1, \pi^2$, the stationary distributions over the states under the policies are bounded: $\|D_{\pi^1} - D_{\pi^2}\| \le c_D\,\|\pi^1 - \pi^2\|$, for some $c_D > 0$.
Proof. Let $T^1$ and $T^2$ be the two transition matrices under the stationary distributions $D_{\pi^1}, D_{\pi^2}$. For any $ij$ element $T^1_{ij}$:
$\|T^1_{ij} - T^2_{ij}\| = \Big\|\sum_a p(s_j \mid a, s_i)\big(\pi^1(a \mid s_i) - \pi^2(a \mid s_i)\big)\Big\|$ (9)
$\le |\mathcal{A}|\,\big\|\pi^1(a \mid s_j) - \pi^2(a \mid s_j)\big\|$ (10)
$\le |\mathcal{A}|\,\|\pi^1 - \pi^2\|_\infty$ (11) | 1511.06342#44 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06279 | 45 | 90X160X250
For the simple Seq2Seq baseline above (same number of LSTM layers and hidden units as NPI), we observed that the model could predict one or two digits reliably, but did not generalize even up to 20-digit addition. However, we are aware that others have gotten multi-digit addition of the above form to work to some extent with curriculum learning (Zaremba & Sutskever, 2014). In order to make a more competitive baseline, we helped Seq2Seq in two ways: 1) reverse input digits and stack the two numbers on top of each other to form a 2-channel sequence, and 2) reverse input digits and generate reversed output digits immediately at each time step.
In the approach of 1), the seq2seq model schematically looks like this:
output: XXXX250 input 1: 090XXXX input 2: 061XXXX
In the approach of 2), the sequence looks like this:
output: 052 input 1: 090 input 2: 061
Both 1), which we call s2s-stacked, and 2), which we call s2s-easy, are much stronger competitors to NPI than even the proposed addition baseline. We compare the generalization performance of NPI to these baselines in the figure below: | 1511.06279#45 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
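The "s2s-easy" encoding described in the chunk above (reverse the input digits and emit reversed output digits) can be sketched as a small helper. This is a hypothetical encoder for illustration; the paper's exact tokenization may differ.

```python
def s2s_easy(a, b):
    """Encode 'a + b' as reversed, zero-padded digit strings (s2s-easy style)."""
    out = str(a + b)[::-1]                  # reversed sum, e.g. 250 -> "052"
    width = len(out)                        # pad inputs to the output length
    in1 = str(a)[::-1].ljust(width, "0")    # 90  -> "09"  -> "090"
    in2 = str(b)[::-1].ljust(width, "0")    # 160 -> "061"
    return in1, in2, out

# Reproduces the example from the text, "90 + 160 = 250":
print(s2s_easy(90, 160))   # ('090', '061', '052')
```

Reversing the digits aligns each output position with the input digits it depends on (plus a carry), which is exactly the locality argument made for NPI's pointer moves.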
1511.06342 | 45 | $\le |\mathcal{A}|\,\big\|\pi^1(a \mid s_j) - \pi^2(a \mid s_j)\big\| \le |\mathcal{A}|\,\|\pi^1 - \pi^2\|_\infty$ (11)
The above bound on each $ij$ element implies that the Euclidean distance between the transition matrices is also upper bounded: $\|T^1 - T^2\| \le |\mathcal{S}|\,|\mathcal{A}|\,\|\pi^1 - \pi^2\|_\infty$. Seneta (1991) has shown that $\|D_{\pi^1} - D_{\pi^2}\| \le \frac{1}{1-\lambda^1}\,\|T^1 - T^2\|_\infty$, where $\lambda^1$ is the largest eigenvalue of $T^1$. Hence, there is a constant $c_D > 0$ such that $\|D_{\pi^1} - D_{\pi^2}\| \le c_D\,\|\pi^1 - \pi^2\|$.
Lemma 3. For any two softmax policy matrices $P_{\theta^1}, P_{\theta^2}$ from the linear function approximator, $\|P_{\theta^1} - P_{\theta^2}\| \le c_s\,\|\Phi\theta^1 - \Phi\theta^2\|$, for some $c_s > 0$.
Proof. Note that the $i$th-row, $j$th-column element $p(a_j \mid s_i)$ of a softmax policy matrix $P$ is computed by the softmax transformation of the Q function:
$P_{ij} = p(a_j \mid s_i) = \mathrm{softmax}(Q(s_i, a_j)) = \frac{e^{Q(s_i, a_j)}}{\sum_k e^{Q(s_i, a_k)}}$ (12) | 1511.06342#45 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
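Eq. (12) above, the row-wise softmax that turns Q-values into a policy matrix, can be checked numerically in a few lines of NumPy. This is an illustrative computation, not code from the paper.

```python
import numpy as np

def policy_matrix(Q):
    """P[i, j] = exp(Q[i, j]) / sum_k exp(Q[i, k]), as in Eq. (12)."""
    e = np.exp(Q - Q.max(axis=1, keepdims=True))  # shift rows for numerical stability
    return e / e.sum(axis=1, keepdims=True)

Q = np.array([[1.0, 2.0, 0.5],
              [0.0, 0.0, 0.0]])
P = policy_matrix(Q)
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a distribution over actions
```

The per-row shift by the maximum leaves the softmax unchanged (numerator and denominator share the factor) while avoiding overflow, a standard trick when exponentiating Q-values.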
1511.06342 | 46 | $P_{ij} = p(a_j \mid s_i) = \mathrm{softmax}(Q(s_i, a_j)) = \frac{e^{Q(s_i, a_j)}}{\sum_k e^{Q(s_i, a_k)}}$ (12)
Because the softmax function is a monotonically increasing element-wise function on matrices, the Euclidean distance of the softmax transformation is upper bounded by the largest Jacobian in the domain of the softmax function. Namely, for $c_s = \max_{x \in \mathrm{Dom}\,\mathrm{softmax}} \big\|\frac{\partial\,\mathrm{softmax}(x)}{\partial x}\big\|$,
$\|\mathrm{softmax}(x^1) - \mathrm{softmax}(x^2)\| \le c_s\,\|x^1 - x^2\|, \quad \forall x^1, x^2 \in \mathrm{Dom}\,\mathrm{softmax}.$ (13)
Bounding the elements of the $P$ matrix in this way gives $\|P_{\theta^1} - P_{\theta^2}\| \le c_s\,\|Q_{\theta^1} - Q_{\theta^2}\| = c_s\,\|\Phi\theta^1 - \Phi\theta^2\|$.
Theorem 1. Assume the Markov decision process is irreducible and aperiodic for any policy $\pi$ induced by the $\Gamma$ operator, and that $\Gamma$ is Lipschitz continuous with a constant $c_\epsilon$. Then the sequence of policies and model parameters generated by the iterative algorithm above converges almost surely to a unique solution $\pi^*$ and $\theta^*$. | 1511.06342#46 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06279 | 47 | Figure 9: Comparing NPI and Seq2Seq variants on addition generalization to longer sequences.
We found that NPI trained on 32 examples for problem lengths 1,...,20 generalizes with 100% accuracy to all the lengths we tried (up to 3000). s2s-easy trained on twice as many examples generalizes to just over length 2000 problems. s2s-stacked barely generalizes beyond 5, even with far more data. This suggests that locality of computation makes a large impact on generalization performance. Even when we carefully ordered and stacked the input numbers for Seq2Seq, NPI still had an edge in performance. In contrast to Seq2Seq, NPI is taught (supervised for now) to move its pointers so that the key operations (e.g. single digit add, carry) can be done using only local information, and this appears to help generalization.
| 1511.06279#47 | Neural Programmer-Interpreters | We propose the neural programmer-interpreter (NPI): a recurrent and
compositional neural network that learns to represent and execute programs. NPI
has three learnable components: a task-agnostic recurrent core, a persistent
key-value program memory, and domain-specific encoders that enable a single NPI
to operate in multiple perceptually diverse environments with distinct
affordances. By learning to compose lower-level programs to express
higher-level programs, NPI reduces sample complexity and increases
generalization ability compared to sequence-to-sequence LSTMs. The program
memory allows efficient learning of additional tasks by building on existing
programs. NPI can also harness the environment (e.g. a scratch pad with
read-write pointers) to cache intermediate results of computation, lessening
the long-term memory burden on recurrent hidden units. In this work we train
the NPI with fully-supervised execution traces; each program has example
sequences of calls to the immediate subprograms conditioned on the input.
Rather than training on a huge number of relatively weak labels, NPI learns
from a small number of rich examples. We demonstrate the capability of our
model to learn several types of compositional programs: addition, sorting, and
canonicalizing 3D models. Furthermore, a single NPI learns to execute these
programs and all 21 associated subprograms. | http://arxiv.org/pdf/1511.06279 | Scott Reed, Nando de Freitas | cs.LG, cs.NE | ICLR 2016 conference submission | null | cs.LG | 20151119 | 20160229 | [
{
"id": "1511.04834"
},
{
"id": "1505.00521"
},
{
"id": "1511.08228"
},
{
"id": "1511.07275"
},
{
"id": "1511.06392"
}
] |
1511.06342 | 47 | Proof. We follow a similar contraction argument made in Perkins & Precup (2002), and show that the iterative algorithm is a contraction process. Namely, for any two policies $\pi^1$ and $\pi^2$, the learning algorithm above produces new policies $\Gamma(Q_{\theta^1}), \Gamma(Q_{\theta^2})$ after one iteration, where $\|\Gamma(Q_{\theta^1}) - \Gamma(Q_{\theta^2})\| \le \beta\,\|\pi^1 - \pi^2\|$. Here $\|\cdot\|$ is the Euclidean norm and $\beta \in (0, 1)$.
By Lipschitz continuity,
$\|\Gamma(Q_{\theta^1}) - \Gamma(Q_{\theta^2})\| \le c_\epsilon\,\|Q_{\theta^1} - Q_{\theta^2}\| = c_\epsilon\,\|\Phi\theta^1 - \Phi\theta^2\|$ (14)
$\le c_\epsilon\,\|\Phi\|\,\|\theta^1 - \theta^2\|.$ (15)
Let $\theta^1$ and $\theta^2$ be the stationary points of Eq. (7) under $\pi^1$ and $\pi^2$, that is, $\nabla_{\theta^1} = \nabla_{\theta^2} = 0$ respectively. Rearranging Eq. (8) gives,
$\|\theta^1 - \theta^2\| = \frac{1}{\lambda}\big\|\Phi^T D_{\pi^1}(P_{\theta^1} - \Pi_E) - \Phi^T D_{\pi^2}(P_{\theta^2} - \Pi_E)\big\|$ (16) | 1511.06342#47 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
1511.06342 | 48 | $\|\theta^1 - \theta^2\| = \frac{1}{\lambda}\big\|\Phi^T D_{\pi^1}(P_{\theta^1} - \Pi_E) - \Phi^T D_{\pi^2}(P_{\theta^2} - \Pi_E)\big\|$ (16)
$= \frac{1}{\lambda}\big\|\Phi^T (D_{\pi^2} - D_{\pi^1})\Pi_E + \Phi^T D_{\pi^1} P_{\theta^1} - \Phi^T D_{\pi^1} P_{\theta^2} + \Phi^T D_{\pi^1} P_{\theta^2} - \Phi^T D_{\pi^2} P_{\theta^2}\big\|$ (17)
$= \frac{1}{\lambda}\big\|\Phi^T (D_{\pi^2} - D_{\pi^1})\Pi_E + \Phi^T D_{\pi^1}(P_{\theta^1} - P_{\theta^2}) + \Phi^T (D_{\pi^1} - D_{\pi^2}) P_{\theta^2}\big\|$ (18)
$\le \frac{1}{\lambda}\big(\|\Phi^T\|\,\|D_{\pi^1} - D_{\pi^2}\|\,\|\Pi_E\| + \|\Phi^T\|\,\|D_{\pi^1}\|\,\|P_{\theta^1} - P_{\theta^2}\| + \|\Phi^T\|\,\|D_{\pi^1} - D_{\pi^2}\|\,\|P_{\theta^2}\|\big)$ (19)
$\le c\,\|\pi^1 - \pi^2\|.$ (20)
The last inequality is given by Lemmas 2 and 3 and the compactness of $\Phi$. For a Lipschitz constant $c_\epsilon > c$, there exists a $\lambda$ such that $\|\Gamma(Q_{\theta^1}) - \Gamma(Q_{\theta^2})\| < \|\pi^1 - \pi^2\|$. Hence, the sequence of policies generated by the algorithm converges almost surely to a unique fixed point $\pi^*$ by Lemma 1 and the Contraction Mapping Theorem (Bertsekas, 1995). Furthermore, the model parameters converge w.p. 1 to a stationary point $\theta^*$ under the fixed-point policy $\pi^*$. | 1511.06342#48 | Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | The ability to act in multiple environments and transfer previous knowledge
to new situations can be considered a critical aspect of any intelligent agent.
Towards this goal, we define a novel method of multitask and transfer learning
that enables an autonomous agent to learn how to behave in multiple tasks
simultaneously, and then generalize its knowledge to new domains. This method,
termed "Actor-Mimic", exploits the use of deep reinforcement learning and model
compression techniques to train a single policy network that learns how to act
in a set of distinct tasks by using the guidance of several expert teachers. We
then show that the representations learnt by the deep policy network are
capable of generalizing to new tasks with no prior expert guidance, speeding up
learning in novel environments. Although our method can in general be applied
to a wide range of problems, we use Atari games as a testing environment to
demonstrate these methods. | http://arxiv.org/pdf/1511.06342 | Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov | cs.LG | Accepted as a conference paper at ICLR 2016 | null | cs.LG | 20151119 | 20160222 | [
{
"id": "1503.02531"
}
] |
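For reference, the contraction argument above instantiates the standard Contraction Mapping (Banach fixed-point) Theorem. A textbook statement, following Bertsekas (1995) rather than quoting the paper, is:

```latex
% Contraction Mapping (Banach fixed-point) Theorem, textbook form.
Let $(X, \|\cdot\|)$ be a complete normed space and $\Gamma : X \to X$ a
contraction, i.e.\ there exists $\epsilon \in [0, 1)$ such that
\[
  \|\Gamma(x^1) - \Gamma(x^2)\| \le \epsilon \, \|x^1 - x^2\|
  \quad \text{for all } x^1, x^2 \in X .
\]
Then $\Gamma$ admits a unique fixed point $x^*$ with $\Gamma(x^*) = x^*$, and
the iterates $x_{k+1} = \Gamma(x_k)$ converge to $x^*$ from any $x_0 \in X$.
```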
1511.06342 | 50 | All of our Actor-Mimic Networks (AMNs) were trained using the Adam (Kingma & Ba, 2015) optimization algorithm. The AMNs have a single 18-unit output, with each output corresponding to one of the 18 possible Atari player actions. Having the full 18-action output simplifies the multitask case when each game has a different subset of valid actions. While playing a certain game, we mask out AMN action outputs that are not valid for that game and take the softmax over only the subset of valid actions. We use a replay memory for each game to reduce correlations between successive frames and stabilize network training. Because the memory requirements of having the standard replay memory size of 1,000,000 frames for each game are prohibitive when we are training over many source games, for AMNs we use a per-game 100,000 frame replay memory. AMN training was stable even with only a per-game equivalent of a tenth of the replay memory size of the DQN experts. For the transfer experiments with the feature regression objective, we set the scaling parameter β to 0.01 and the feature prediction network f_i was set to a linear projection from the AMN features to the i-th expert features. For the policy regression objective, we use
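The action masking and valid-subset softmax described above can be sketched in plain Python. The 18-way head and the particular valid-action set below are illustrative choices, not taken from the paper's code:

```python
import math

def masked_softmax(logits, valid, temperature=1.0):
    # Softmax over only the valid action subset; invalid actions get probability 0.
    # Subtracting the max over valid entries keeps the exponentials numerically stable.
    m = max(l for l, v in zip(logits, valid) if v)
    exps = [math.exp((l - m) / temperature) if v else 0.0
            for l, v in zip(logits, valid)]
    z = sum(exps)
    return [e / z for e in exps]

# 18-way AMN output head; suppose only actions 0, 2 and 3 are valid in this game.
logits = [1.0, 9.0, 1.0, 1.0] + [9.0] * 14  # large logits on invalid actions are ignored
valid = [i in (0, 2, 3) for i in range(18)]
probs = masked_softmax(logits, valid)
```

The masking happens before normalization, so high activations on invalid actions cannot leak probability mass into the chosen distribution.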
1511.06342 | 51 | and the feature prediction network f_i was set to a linear projection from the AMN features to the i-th expert features. For the policy regression objective, we use a softmax temperature of 1 in all cases. Additionally, during training for all AMNs we use an ε-greedy policy with ε set to a constant 0.1. Annealing ε from 1 did not provide any noticeable benefit. During training, we choose actions based on the AMN and not the expert DQN. We do not use weight decay during AMN training as we empirically found that it did not provide any large benefits.
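A minimal sketch of the constant-ε ε-greedy rule described above, restricted to the valid actions of the current game. The toy action values are hypothetical:

```python
import random

def epsilon_greedy(action_values, valid, eps=0.1, rng=random):
    # With probability eps act uniformly at random over the valid actions,
    # otherwise act greedily with respect to the network's output.
    valid_ids = [i for i, v in enumerate(valid) if v]
    if rng.random() < eps:
        return rng.choice(valid_ids)
    return max(valid_ids, key=lambda i: action_values[i])

values = [0.1, 0.9, 0.4]          # hypothetical per-action scores
valid = [True, False, True]       # action 1 is invalid in this game
greedy = epsilon_greedy(values, valid, eps=0.0)  # always greedy here
```

With eps=0.0 the rule is purely greedy over the valid subset; the paper's constant eps=0.1 simply mixes in 10% uniform exploration.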
1511.06342 | 52 | For the experiments using the DQN algorithm, we optimize the networks with RMSProp. Since the DQNs are trained on a single game, their output layers only contain the player actions that are valid in the particular game that they are trained on. The experts guiding the AMNs used the same architecture, hyperparameters and training procedure as that of Mnih et al. (2015). We use the full 1,000,000 frame replay memory when training any DQN.
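The replay memories described here (1,000,000 frames for each expert DQN, and 100,000 frames per game for the AMNs above) can be sketched as a fixed-capacity ring buffer. The transition contents are left abstract:

```python
import random
from collections import deque

class ReplayMemory:
    # Fixed-capacity ring buffer of transitions; the AMN keeps one instance
    # per source game (100,000 frames vs. the experts' 1,000,000).
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)  # evicts the oldest entry once full

    def sample(self, batch_size):
        # Uniform sampling breaks correlations between successive frames.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

memory = ReplayMemory(capacity=3)   # tiny capacity just for illustration
for t in range(5):
    memory.add(t)
```

The `deque(maxlen=...)` gives the oldest-first eviction a replay memory needs without any manual index bookkeeping.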
# APPENDIX C MULTITASK DQN BASELINE RESULTS
As a baseline, we trained DQN networks over 8 games simultaneously to test their performance against the Actor-Mimic method. We tried two different architectures; the first uses the basic DQN procedure on all 8 games. This network has a single 18 action output shared by all games, but when we train or test in a particular game, we mask out and ignore the action values from actions that are invalid for that particular game. This architecture is denoted the Multitask DQN (MDQN). The second architecture is a DQN but where each game has a separate fully-connected feature layer and action output. In this architecture only the convolutions are shared between games, and thus the features and action values are completely separate. This was to try to mitigate the destabilizing
Published as a conference paper at ICLR 2016
1511.06342 | 53 |
[Figure 2 training-curve panels: Atlantis, Boxing, Breakout, Crazy Climber, Enduro, Seaquest, Space Invaders; legend: AMN, DQN, MDQN.]
Figure 2: The Actor-Mimic, expert DQN, and Multitask DQN (MDQN) training curves for 40 training epochs for each of the 8 games. A training epoch is 250,000 frames and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report AMN, expert DQN and MDQN test reward for each testing epoch. In the testing epoch we use ε = 0.05 in the ε-greedy policy. The y-axis is the average unscaled episode reward during a testing epoch.
1511.06342 | 54 | [Figure 3 training-curve panels: Atlantis, Boxing, Breakout, Crazy Climber, Seaquest; legend: AMN, DQN, MCDQN.]
Figure 3: The Actor-Mimic, expert DQN, and Multitask Convolutions DQN (MCDQN) training curves for 40 training epochs for each of the 8 games. A training epoch is 250,000 frames and for each training epoch we evaluate the networks with a testing epoch that lasts 125,000 frames. We report AMN, expert DQN and MCDQN test reward for each testing epoch. In the testing epoch we use ε = 0.05 in the ε-greedy policy. The y-axis is the average unscaled episode reward during a testing epoch.
effect that the different value scales of each game had during learning. This architecture is denoted the Multitask Convolutions DQN (MCDQN).
1511.06342 | 55 | effect that the different value scales of each game had during learning. This architecture is denoted the Multitask Convolutions DQN (MCDQN).
The results for the MDQN and MCDQN are shown in Figures 2 and 3, respectively. From the figures, we can observe that the AMN is far more stable during training as well as being consistently higher in performance than either the MDQN or MCDQN methods. In addition, it can be seen that the MDQN and MCDQN will often focus on performing reasonably well on a small subset of the source games, such as on Boxing and Enduro, while making little to no progress in others, such as Breakout or Pong. Between the MDQN and MCDQN, we can see that the MCDQN hardly improves results even though it has significantly larger computational cost that scales linearly with the number of source games.
1511.06342 | 56 | For the specific details of the architectures we tested, for the MDQN the architecture was: 8x8x4x32-4¹ → 4x4x32x64-2 → 3x3x64x64-1 → 512 fully-connected units → 18 actions. This is exactly the same network architecture as used for the 8 game AMN in Section 5.1. For the MCDQN, the bottom convolutional layers were the same as the MDQN, except there are 8 parallel subnetworks on top of the convolutional layers. These game-specific subnetworks had the architecture: 512 fully-connected units → 18 actions. All layers except the action outputs were followed with a rectifier non-linearity.
¹ Here we represent convolutional layers as WxWxCxN-S, where W is the width of the (square) convolution kernel, C is the number of input images, N is the number of filter maps and S is the convolution stride.
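Using the WxWxCxN-S notation above, the MDQN trunk's feature-map sizes can be checked with the standard valid-convolution formula. The 84x84 input resolution is an assumption carried over from Mnih et al. (2015), not stated in this excerpt:

```python
def conv_out(size, kernel, stride):
    # Spatial size after a "valid" (no padding) convolution:
    # floor((size - kernel) / stride) + 1.
    return (size - kernel) // stride + 1

# MDQN trunk: 8x8x4x32-4 -> 4x4x32x64-2 -> 3x3x64x64-1,
# applied to an assumed 84x84 input frame stack.
size = 84
for kernel, stride in ((8, 4), (4, 2), (3, 1)):
    size = conv_out(size, kernel, stride)

flat_features = 64 * size * size  # 64 maps from the last convolution feed the 512-unit layer
```

The resulting 20 → 9 → 7 spatial sizes match the well-known DQN feature-map shapes, so the 512-unit fully-connected layer sees 3136 inputs under this assumption.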
# APPENDIX D ACTOR-MIMIC NETWORK MULTITASK RESULTS FOR TRANSFER PRETRAINING
1511.06342 | 57 |
The network used for transfer consisted of the following architecture: 8x8x4x256-4¹ → 4x4x256x512-2 → 3x3x512x512-1 → 3x3x512x512-1 → 2048 fully-connected units → 1024 fully-connected units → 18 actions. All layers except the final one were followed with a rectifier non-linearity.
1511.06342 | 58 | [Figure 4 training-curve panels: Assault, Atlantis, Beam Rider, Boxing, Crazy Climber, Enduro, Fishing Derby, Kangaroo, Name This Game, Pong, Seaquest; legend: AMN-policy, DQN, DQN-Max, DQN-Mean.]
1511.06342 | 60 | Figure 4: The Actor-Mimic training curves for the network trained solely with the policy regression objective (AMN-policy). The AMN-policy is trained for 16 epochs, or 4 million frames per game. We compare against the (smaller network) expert DQNs, which are trained until convergence. We also report the maximum test reward the expert DQN achieved over all training epochs, as well as the mean testing reward achieved over the last 10 epochs.
[Figure 5 training-curve panels: Atlantis, Assault, Beam Rider, Enduro, Fishing Derby, Kangaroo, Pong, Seaquest, Space Invaders; legend: AMN-feature, DQN, DQN-Max, DQN-Mean.]
1511.06342 | 61 | Figure 5: The Actor-Mimic training curves for the network trained with both the feature and policy regression objective (AMN-feature). The AMN-feature is trained for 16 epochs, or 4 million frames per game. We compare against the (smaller network) expert DQNs, which are trained until convergence. We also report the maximum test reward the expert DQN achieved over all training epochs, as well as the mean testing reward achieved over the last 10 epochs.
# APPENDIX E TABLE 1 BARPLOT
[Figure 6 bar plots: Relative Mean Score (100% × AMN) and Relative Max Score (100% × AMN) over Atlantis, Boxing, Breakout, Crazy Climber, Enduro, Pong, Seaquest, Space Invaders.]
Figure 6: Plots showing relative mean reward improvement (left) and relative max reward improvement (right) of the multitask AMN over the expert DQNs. See Table 1 for details on how these values were calculated.
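The relative scores plotted in Figure 6 can be computed as below, assuming (as the axis labels suggest, though the exact definition is not in this excerpt) that a relative score is the AMN reward expressed as a percentage of the expert DQN reward:

```python
def relative_score(amn_reward, dqn_reward):
    # Assumed definition: the AMN's reward as a percentage of the expert DQN's
    # reward on the same game; 100.0 means parity with the expert.
    return 100.0 * amn_reward / dqn_reward

# Hypothetical rewards, not values from the paper's tables.
parity = relative_score(100.0, 100.0)
better = relative_score(120.0, 100.0)
```

Under this reading, bars above 100% in Figure 6 mark games where the multitask AMN outperforms its single-game expert.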
# APPENDIX F TABLE 2 LEARNING CURVES
1511.05756 | 1 | # Abstract
We tackle image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network, a joint network with the CNN for ImageQA and the parameter prediction network, is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm illustrates the state-of-the-art performance on all available public ImageQA benchmarks.
# 1. Introduction | 1511.05756#1 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | We tackle image question answering (ImageQA) problem by learning a
convolutional neural network (CNN) with a dynamic parameter layer whose weights
are determined adaptively based on questions. For the adaptive parameter
prediction, we employ a separate parameter prediction network, which consists
of gated recurrent unit (GRU) taking a question as its input and a
fully-connected layer generating a set of candidate weights as its output.
However, it is challenging to construct a parameter prediction network for a
large number of parameters in the fully-connected dynamic parameter layer of
the CNN. We reduce the complexity of this problem by incorporating a hashing
technique, where the candidate weights given by the parameter prediction
network are selected using a predefined hash function to determine individual
weights in the dynamic parameter layer. The proposed network---joint network
with the CNN for ImageQA and the parameter prediction network---is trained
end-to-end through back-propagation, where its weights are initialized using a
pre-trained CNN and GRU. The proposed algorithm illustrates the
state-of-the-art performance on all available public ImageQA benchmarks. | http://arxiv.org/pdf/1511.05756 | Hyeonwoo Noh, Paul Hongsuck Seo, Bohyung Han | cs.CV, cs.CL, cs.LG | null | null | cs.CV | 20151118 | 20151118 | [
{
"id": "1506.00333"
}
] |
1511.05756 | 2 | # 1. Introduction
One of the ultimate goals in computer vision is holistic scene understanding [30], which requires a system to capture various kinds of information such as objects, actions, events, scene, atmosphere, and their relations in many different levels of semantics. Although significant progress on various recognition tasks [5, 8, 21, 24, 26, 27, 31] has been made in recent years, these works focus only on solving relatively simple recognition problems in controlled settings, where each dataset consists of concepts with similar level of understanding (e.g. object, scene, bird species, face identity, action, texture etc.). There have been fewer efforts to solve various recognition problems simultaneously, which is more complex and realistic, even though this is a crucial step toward holistic scene understanding.
Q: What type of animal is this? Q: Is this animal alone?
Q: Is it snowing? Q: Is this picture taken during the day?
Q: What kind of oranges are these? Q: Is the fruit sliced?
Q: What is leaning on the wall? Q: How many boards are there?
Figure 1. Sample images and questions in VQA dataset [1]. Each question requires different type and/or level of understanding of the corresponding input image to find correct answers.
1511.05756 | 3 | Figure 1. Sample images and questions in VQA dataset [1]. Each question requires different type and/or level of understanding of the corresponding input image to find correct answers.
Image question answering (ImageQA) [1, 17, 23] aims to solve the holistic scene understanding problem by proposing a task unifying various recognition problems. ImageQA is a task automatically answering the questions about an input image as illustrated in Figure 1. The critical challenge of this problem is that different questions require different types and levels of understanding of an image to find correct answers. For example, to answer the question like "how is the weather?" we need to perform classification on multiple choices related to weather, while we should decide between yes and no for the question like "is this picture taken during the day?" For this reason, not only the performance on a single recognition task but also the capability to select a proper task is important to solve ImageQA problem.
ImageQA problem has a short history in computer vision and machine learning community, but there already exist several approaches [10, 16, 17, 18, 23]. Among these methods, simple deep learning based approaches that perform classification on a combination of features extracted from image and question currently demonstrate the state-of-
1511.05756 | 4 | the-art accuracy on public benchmarks [23, 16]; these ap- proaches extract image features using a convolutional neu- ral network (CNN), and use CNN or bag-of-words to obtain feature descriptors from question. They can be interpreted as a method that the answer is given by the co-occurrence of a particular combination of features extracted from an image and a question.
Contrary to the existing approaches, we define a different recognition task depending on the question. To realize this idea, we propose a deep CNN with a dynamic parameter layer whose weights are determined adaptively based on questions. We claim that a single deep CNN architecture can take care of various tasks by allowing adaptive weight assignment in the dynamic parameter layer. For the adaptive parameter prediction, we employ a parameter prediction network, which consists of gated recurrent units (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights for the dynamic parameter layer. The entire network, including the CNN for ImageQA and the parameter prediction network, is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. Our main contributions in this work are summarized below:

- We successfully adopt a deep CNN with a dynamic parameter layer for ImageQA, which is a fully-connected layer whose parameters are determined dynamically based on a given question.
- To predict the large number of weights in the dynamic parameter layer effectively and efficiently, we apply the hashing trick [3], which reduces the number of parameters significantly with little impact on network capacity.
- We fine-tune the GRU pre-trained on a large-scale text corpus [14] to improve the generalization performance of our network. Pre-training the GRU on a large corpus is a natural way to deal with a small amount of training data, but to our knowledge it has not been attempted before.
- This is the first work to report results on all currently available benchmark datasets, namely DAQUAR, COCO-QA and VQA. Our algorithm achieves the state-of-the-art performance on all three datasets.
The rest of this paper is organized as follows. We first review related work in Section 2. Sections 3 and 4 describe the overview of our algorithm and the architecture of our network, respectively. We discuss the detailed procedure to train the proposed network in Section 5. Experimental results are demonstrated in Section 6.
# 2. Related Work
There are several recent papers addressing ImageQA [1, 10, 16, 17, 18, 23]; most of them are based on deep learning, except [17]. Malinowski and Fritz [17] propose a Bayesian framework, which exploits recent advances in computer vision and natural language processing. Specifically, it employs semantic image segmentation and symbolic question reasoning to solve the ImageQA problem. However, this method depends on a pre-defined set of predicates, which makes it difficult to represent the complex models required to understand input images.

Deep learning based approaches demonstrate competitive performance in ImageQA [18, 10, 23, 16, 1]. Most approaches based on deep learning commonly use CNNs to extract features from the image, while they use different strategies to handle question sentences. Some algorithms employ embedding of joint features based on the image and question [1, 10, 18]. However, learning a softmax classifier on simple joint features (the concatenation of CNN-based image features and a continuous bag-of-words representation of the question) performs better than LSTM-based embedding on the COCO-QA [23] dataset. Another line of research is to utilize CNNs for feature extraction from both the image and the question and combine the two features [16]; this approach demonstrates impressive performance improvement on the DAQUAR [17] dataset by allowing fine-tuning of the whole set of parameters.
The prediction of weight parameters in deep neural networks has been explored in [2] in the context of zero-shot learning. To perform classification of unseen classes, it trains a multi-layer perceptron to predict a binary classifier from a class-specific description in text. However, this method is not directly applicable to ImageQA since finding solutions based on the combination of question and answer is a more complex problem than the one discussed in [2], and ImageQA involves a significantly larger set of candidate answers, which requires many more parameters than the binary classification case. Recently, a parameter reduction technique based on a hashing trick was proposed by Chen et al. [3] to fit a large neural network in a limited memory budget. However, to our knowledge, applying this technique to the dynamic prediction of parameters in deep neural networks has not been attempted before.
# 3. Algorithm Overview
We briefly describe the motivation and formulation of our approach in this section.
# 3.1. Motivation
Figure 2. Overall architecture of the proposed Dynamic Parameter Prediction network (DPPnet), which is composed of the classification network and the parameter prediction network. The weights in the dynamic parameter layer are mapped by a hashing trick from the candidate weights obtained from the parameter prediction network.

Although ImageQA requires different types and levels of image understanding, existing approaches [1, 10, 18] pose the problem as a flat classification task. However, we believe that it is difficult to solve ImageQA using a single deep neural network with fixed parameters. In many CNN-based recognition problems, it is well known that fine-tuning a few layers helps adaptation to new tasks. In addition, some networks are designed to solve two or more tasks jointly by constructing multiple branches connected to a common CNN architecture. In this work, we hope to solve the heterogeneous recognition tasks using a single CNN by adapting the weights in the dynamic parameter layer. Since the task is defined by the question in ImageQA, the weights in the layer are determined depending on the question sentence. In addition, a hashing trick is employed to predict the large number of weights in the dynamic parameter layer and avoid parameter explosion.
# 3.2. Problem Formulation
ImageQA systems predict the best answer â given an image I and a question q. Conventional approaches [16, 23] typically construct a joint feature vector based on the two inputs I and q and solve a classification problem for ImageQA using the following equation:

â = argmax_{a∈Ω} p(a | I, q; θ)    (1)

where Ω is the set of all possible answers and θ is a vector of the parameters in the network. On the contrary, we use the question to predict the weights in the classifier and solve the problem. We find the solution by

â = argmax_{a∈Ω} p(a | I; θ_s, θ_d(q))    (2)

where θ_s and θ_d(q) denote static and dynamic parameters, respectively. Note that the values of θ_d(q) are determined by the question q.

# 4. Network Architecture

Figure 2 illustrates the overall architecture of the proposed algorithm. The network is composed of two sub-networks: the classification network and the parameter prediction network. The classification network is a CNN. One of the fully-connected layers in the CNN is the dynamic parameter layer, and the weights in that layer are determined adaptively by the parameter prediction network. The parameter prediction network has GRU cells and a fully-connected layer. It takes a question as its input and generates a real-valued vector, which corresponds to candidate weights for the dynamic parameter layer in the classification network. Given an image and a question, our algorithm estimates the weights in the dynamic parameter layer through hashing with the candidate weights obtained from the parameter prediction network. Then, it feeds the input image to the classification network to obtain the final answer. More details of the proposed network are discussed in the following subsections.

# 4.1. Classification Network

The classification network is constructed based on the VGG 16-layer net [24], which is pre-trained on ImageNet [6]. We remove the last layer of the network and attach three fully-connected layers. The second-to-last fully-connected layer is the dynamic parameter layer, whose weights are determined by the parameter prediction network, and the last fully-connected layer is the classification layer, whose output dimensionality is equal to the number of possible answers. The probability of each answer is computed by applying a softmax function to the output vector of the final layer.
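To make the data flow of this head concrete, here is a minimal NumPy sketch. All layer sizes and the random weights are invented for illustration (the real network uses VGG-16 features and much wider layers), and `W_dyn` stands in for the question-dependent weights that the parameter prediction network would supply:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy dimensions: feature size N, dynamic-layer size M, number of answers A.
N, M, A = 64, 32, 10
img_feat = rng.standard_normal(N)                       # stand-in for a CNN image feature
W1, b1 = 0.1 * rng.standard_normal((N, N)), np.zeros(N)
W_dyn, b_dyn = 0.1 * rng.standard_normal((M, N)), np.zeros(M)  # predicted per question
W_cls, b_cls = 0.1 * rng.standard_normal((A, M)), np.zeros(A)  # static classification layer

h1 = np.maximum(W1 @ img_feat + b1, 0.0)    # first attached fully-connected layer
h2 = np.maximum(W_dyn @ h1 + b_dyn, 0.0)    # dynamic parameter layer
probs = softmax(W_cls @ h2 + b_cls)         # classification layer + softmax
answer = int(np.argmax(probs))              # index of the predicted answer
```

Changing `W_dyn` while keeping everything else fixed is exactly what turns the same image pipeline into a different recognition task.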
We put the dynamic parameter layer at the second-to-last fully-connected layer instead of the classification layer because it involves the smallest number of parameters. As the number of parameters in the classification layer increases in proportion to the number of possible answers, predicting the weights for the classification layer may not be a good option for general ImageQA problems in terms of scalability. Our choice of the dynamic parameter layer can be interpreted as follows. By fixing the classification layer while adapting the immediately preceding layer, we obtain a task-independent semantic embedding of all possible answers and use the representation of an input embedded in the answer space to solve an ImageQA problem. Therefore, the relationships among the answers globally learned from all recognition tasks can help solve new ones involving unseen classes, especially in multiple-choice questions. For example, when not the exact ground-truth word (e.g., kitten) but similar words (e.g., cat and kitty) are observed at training time, the network can still predict the close answers (e.g., kitten) based on the globally learned answer embedding. Even though we could also exploit the benefit of the answer embedding to define a loss function based on the relations among answers, we leave this as future work.
# 4.2. Parameter Prediction Network
As mentioned earlier, our classification network has a dynamic parameter layer. That is, for an input vector of the dynamic parameter layer f^i = [f^i_1, ..., f^i_N]^T, its output vector, denoted by f^o = [f^o_1, ..., f^o_M]^T, is given by

f^o = W_d(q) f^i + b    (3)

where b denotes a bias and W_d(q) ∈ R^{M×N} denotes the matrix constructed dynamically using the parameter prediction network given the input question. In other words, the weight matrix corresponding to the layer is parametrized by a function of the input question q.
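Eq. (3) is an ordinary fully-connected layer whose matrix happens to change per question. A shape-checking sketch, with toy dimensions and random stand-in weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# f^i lives in R^N, f^o in R^M, and W_d(q) in R^{M x N}; here N=8, M=5 are toy sizes.
N, M = 8, 5
f_i = rng.standard_normal(N)          # input vector f^i
W_d = rng.standard_normal((M, N))     # stand-in for the question-dependent W_d(q)
b = np.zeros(M)
f_o = W_d @ f_i + b                   # Eq. (3)
```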
The parameter prediction network is composed of GRU cells [4] followed by a fully-connected layer, which produces the candidate weights used to construct the weight matrix of the dynamic parameter layer within the classification network. GRU, which is similar to LSTM, is designed to model dependency over multiple time scales. As illustrated in Figure 3, such dependency is captured by adaptively updating the hidden states with gate units. However, contrary to LSTM, which maintains a separate memory cell explicitly, GRU updates its hidden state directly using a reset gate and an update gate. The detailed update procedure is described below.

Let w_1, ..., w_T be the words in a question q, where T is the number of words in the question. At each time step t, given the embedded vector x_t for the word w_t, the GRU encoder updates its hidden state at time t, denoted by h_t, using the following equations:
r_t = σ(W_r x_t + U_r h_{t−1})    (4)
z_t = σ(W_z x_t + U_z h_{t−1})    (5)
h̄_t = tanh(W_h x_t + U_h (r_t ⊙ h_{t−1}))    (6)
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̄_t    (7)

Figure 3. Comparison of GRU and LSTM. Contrary to LSTM, which contains an explicit memory cell, GRU updates the hidden state directly.
where r_t and z_t respectively denote the reset and update gates at time t, and h̄_t is the candidate activation at time t. In addition, ⊙ indicates the element-wise multiplication operator and σ(·) is a sigmoid function. Note that the coefficient matrices related to GRU, such as W_r, W_z, W_h, U_r, U_z, and U_h, are learned by our training algorithm. By applying this encoder to a question sentence through a series of GRU cells, we obtain the final embedding vector h_T of the question sentence.
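The update in Eqs. (4)-(7) transcribes directly into NumPy. The dimensions and the random matrices below are toy stand-ins for the learned coefficients:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, P):
    """One GRU update following Eqs. (4)-(7); P holds the coefficient matrices."""
    r = sigmoid(P["Wr"] @ x_t + P["Ur"] @ h_prev)              # (4) reset gate
    z = sigmoid(P["Wz"] @ x_t + P["Uz"] @ h_prev)              # (5) update gate
    h_bar = np.tanh(P["Wh"] @ x_t + P["Uh"] @ (r * h_prev))    # (6) candidate activation
    return (1.0 - z) * h_prev + z * h_bar                      # (7) new hidden state

# Toy sizes: 4-d word embeddings, 3-d hidden state; random stand-in weights.
rng = np.random.default_rng(0)
D, H = 4, 3
P = {name: 0.5 * rng.standard_normal((H, D if name.startswith("W") else H))
     for name in ["Wr", "Wz", "Wh", "Ur", "Uz", "Uh"]}

h = np.zeros(H)
for x_t in rng.standard_normal((5, D)):   # encode a 5-word "question"
    h = gru_step(x_t, h, P)               # h is h_T after the last word
```

Since h_t is a convex combination of h_{t−1} and a tanh output, every coordinate of the hidden state stays in (−1, 1).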
Once the question embedding is obtained by GRU, the candidate weight vector, p = [p1, . . . , pK]T, is given by applying a fully-connected layer to the embedded question hT as | 1511.05756#16 | Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction | We tackle image question answering (ImageQA) problem by learning a
$$\mathbf{p} = \mathbf{W}_p \mathbf{h}_T$$

where $\mathbf{p} \in \mathbb{R}^K$ is the output of the parameter prediction network, and $\mathbf{W}_p$ is the weight matrix of the fully-connected layer in the parameter prediction network. Note that, even though we employ GRU for the parameter prediction network because the pre-trained network for sentence embedding, the skip-thought vector model [14], is based on GRU, any form of neural network, e.g., a fully-connected or convolutional neural network, can be used to construct the parameter prediction network.
# 4.3. Parameter Hashing
The weights in the dynamic parameter layer are determined based on the model learned by the parameter prediction network given a question. The most straightforward approach to obtaining the weights is to generate the whole matrix $\mathbf{W}_d(q)$ with the parameter prediction network. However, the size of this matrix is very large, and the network may overfit easily given the limited number of training examples. In addition, since we would need quadratically more parameters between GRU and the fully-connected layer of the parameter prediction network to increase the dimensionality of its output, it is not desirable to predict the full weight matrix with the network. Therefore, it is preferable to construct $\mathbf{W}_d(q)$ from a small number of candidate weights using a hashing trick.
We employ the recently proposed random weight sharing technique based on hashing [3] to construct the weights in the dynamic parameter layer. Specifically, a single parameter in the candidate weight vector $\mathbf{p}$ is shared by multiple elements of $\mathbf{W}_d(q)$; this is done by applying a predefined hash function that converts a 2D location in $\mathbf{W}_d(q)$ to a 1D index in $\mathbf{p}$. By this simple hashing trick, we can reduce the number of parameters in $\mathbf{W}_d(q)$ while maintaining the accuracy of the network [3].
Let $w_{mn}^{d}$ be the element at $(m, n)$ in $\mathbf{W}_d(q)$, which corresponds to the weight between the $m$th output and the $n$th input neuron. Denote by $\psi(m, n)$ a hash function mapping a key $(m, n)$ to a natural number in $\{1, \dots, K\}$, where $K$ is the dimensionality of $\mathbf{p}$. The final hash function is given by
$$w_{mn}^{d} = p_{\psi(m,n)} \cdot \xi(m, n)$$

where $\xi(m, n) : \mathbb{N} \times \mathbb{N} \rightarrow \{+1, -1\}$ is another hash function independent of $\psi(m, n)$. This function is useful for removing the bias of the hashed inner product [3]. In our implementation of the hash function, we adopt an open-source implementation of xxHash¹.
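As a concrete illustration, here is a minimal Python sketch of this construction. It is illustrative only: it uses 0-based indexing and the stdlib MD5 hash in place of the xxHash implementation the authors adopt, and `psi`, `xi`, and `dynamic_weights` are hypothetical names.

```python
import hashlib

def _hash(key: str, mod: int) -> int:
    # Deterministic stand-in for xxHash: hash a string key to an integer in [0, mod).
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % mod

def psi(m: int, n: int, K: int) -> int:
    # Maps the 2D weight position (m, n) to an index into the candidate vector p.
    return _hash(f"psi:{m},{n}", K)

def xi(m: int, n: int) -> int:
    # Independent sign hash in {+1, -1}, used to debias the hashed inner product.
    return 1 if _hash(f"xi:{m},{n}", 2) == 0 else -1

def dynamic_weights(p, M: int, N: int):
    # Builds the M x N dynamic weight matrix: w_mn = p[psi(m, n)] * xi(m, n).
    K = len(p)
    return [[p[psi(m, n, K)] * xi(m, n) for n in range(N)] for m in range(M)]

# Example: K = 4 candidate weights parameterize a 3 x 5 layer (15 weights).
W = dynamic_weights([0.1, -0.2, 0.3, 0.05], M=3, N=5)
```

Because the same candidate weight backs many positions, enlarging the dynamic layer does not enlarge the output of the parameter prediction network.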
We believe that it is reasonable to reduce the number of free parameters with the hashing technique, as there are many redundant parameters in deep neural networks [7] and the network can be parametrized using a smaller set of candidate weights. Instead of training a huge number of parameters without any constraint, it is practically advantageous to allow multiple elements in the weight matrix to share the same value. It has also been demonstrated that the number of free parameters can be reduced substantially with little loss of network performance [3].
# 5. Training Algorithm
This section discusses the error back-propagation algorithm in the proposed network and introduces the techniques adopted to enhance its performance.
# 5.1. Training by Error Back-Propagation
The proposed network is trained end-to-end to minimize the error between the ground-truths and the estimated answers. The error is back-propagated through both the classification network and the parameter prediction network by the chain rule, and the two networks are jointly trained by a first-order optimization method.
Let $L$ denote the loss function. The partial derivatives of $L$ with respect to the $k$th element in the input and output of the dynamic parameter layer are given respectively by
$$\delta_k^{i} \equiv \frac{\partial L}{\partial f_k^{i}} \quad \text{and} \quad \delta_k^{o} \equiv \frac{\partial L}{\partial f_k^{o}}. \tag{10}$$
The two derivatives have the following relation:
$$\delta_n^{i} = \sum_{m=1}^{M} w_{mn}^{d}\, \delta_m^{o}. \tag{11}$$
¹ https://code.google.com/p/xxhash/
Likewise, the derivative with respect to the assigned weights in the dynamic parameter layer is given by
$$\frac{\partial L}{\partial w_{mn}^{d}} = f_n^{i}\, \delta_m^{o}. \tag{12}$$
As a single output value of the parameter prediction network is shared by multiple connections in the dynamic parameter layer, the derivatives with respect to all shared weights need to be accumulated to compute the derivative with respect to an element in the output of the parameter prediction network, as follows:
$$\frac{\partial L}{\partial p_k} = \sum_{m=1}^{M} \sum_{n=1}^{N} \frac{\partial L}{\partial w_{mn}^{d}} \frac{\partial w_{mn}^{d}}{\partial p_k} = \sum_{m=1}^{M} \sum_{n=1}^{N} \frac{\partial L}{\partial w_{mn}^{d}}\, \xi(m, n)\, \mathbb{I}[\psi(m, n) = k], \tag{13}$$
where $\mathbb{I}[\cdot]$ denotes the indicator function. The gradients of all the preceding layers in the classification and parameter prediction networks are computed by the standard back-propagation algorithm.
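The accumulation in Eq. (13) amounts to scattering each weight gradient back to the candidate slot it was hashed from. A small illustrative sketch (a hypothetical routine, not the authors' code; it assumes the per-weight gradients $\partial L / \partial w_{mn}^{d}$ are already computed):

```python
def grad_candidate_weights(dL_dW, psi, xi, K):
    # dL_dW[m][n] holds dL/dw_mn; psi maps (m, n) to a slot k, xi gives the sign.
    dL_dp = [0.0] * K
    for m, row in enumerate(dL_dW):
        for n, g in enumerate(row):
            dL_dp[psi(m, n)] += g * xi(m, n)  # accumulate over all shared positions
    return dL_dp

# Toy example with explicit hash functions on a 2 x 2 weight matrix.
dL_dp = grad_candidate_weights(
    [[1.0, 2.0], [3.0, 4.0]],
    psi=lambda m, n: (m + n) % 2,
    xi=lambda m, n: 1 if (m * n) % 2 == 0 else -1,
    K=2,
)  # -> [-3.0, 5.0]
```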
# 5.2. Using Pre-trained GRU
Although encoders based on recurrent neural networks (RNNs) such as LSTM [11] and GRU [4] demonstrate impressive performance on sentence embedding [19, 25], their benefits in the ImageQA task are marginal in comparison to the bag-of-words model [23]. One of the reasons for this fact is the lack of language data in ImageQA datasets. Contrary to tasks that have large-scale training corpora, even the largest ImageQA dataset contains a relatively small amount of language data; for example, [1] contains 750K questions in total. Note that the model in [25] is trained using a corpus with more than 12M sentences.
To deal with the deficiency of linguistic information in the ImageQA problem, we transfer the information acquired from a large language corpus by fine-tuning the pre-trained embedding network. We initialize the GRU with the skip-thought vector model trained on a book-collection corpus containing more than 74M sentences [14]. Note that the GRU of the skip-thought vector model is trained in an unsupervised manner by predicting the surrounding sentences from the embedded sentences. As this task requires understanding context, the pre-trained model produces a generic sentence embedding, which is difficult to learn from a limited number of training examples. By fine-tuning our GRU initialized with a generic sentence embedding model for ImageQA, we obtain question representations that generalize better.
# 5.3. Fine-tuning CNN
It is very common to transfer CNNs to new tasks in classification problems, but it is not trivial to fine-tune the
CNN in our problem. We observe that the gradients below the dynamic parameter layer in the CNN are noisy since the weights are predicted by the parameter prediction network. Hence, a straightforward approach to fine-tuning the CNN typically fails to improve performance, and we employ a slightly different technique to sidestep the observed problem. We update the parameters of the network using the new datasets except for the part transferred from the VGG 16-layer net at the beginning, and start to update the weights in that subnetwork once the validation accuracy is saturated.
# 5.4. Training Details
Before training, question sentences are normalized to lower case and preprocessed by a simple tokenization technique as in [29]. We normalize the answers to lower case and regard each whole answer, whether a single word or multiple words, as a separate class.
The network is trained end-to-end by back-propagation. Adam [13] is used for optimization with an initial learning rate of 0.01. We clip the gradient to 0.1 to handle gradient explosion arising from the recurrent structure of GRU [22]. Training is terminated when there is no progress in validation accuracy for 5 epochs.
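The clipping step can be sketched as follows. This assumes global L2-norm clipping, which is what [22] proposes; the text states only the threshold 0.1, so the exact variant is an assumption.

```python
import math

def clip_gradient(grads, threshold=0.1):
    # Rescale the gradient vector so that its L2 norm does not exceed `threshold`.
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > threshold:
        scale = threshold / norm
        return [g * scale for g in grads]
    return list(grads)

clipped = clip_gradient([3.0, 4.0])  # norm 5.0, rescaled down to norm 0.1
```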
Optimizing the dynamic parameter layer is not straightforward since the distribution of the outputs of the dynamic parameter layer is likely to change significantly in each batch. Therefore, we apply batch normalization [12] to the output activations of the layer to alleviate this problem. In addition, we observe that GRU tends to converge fast and overfit data easily if training continues without any restriction. We stop fine-tuning GRU when the network starts to overfit and continue to train the other parts of the network; this strategy improves performance in practice.
# 6. Experiments
We now describe the details of our implementation and evaluate the proposed method in various aspects.
# 6.1. Datasets
We evaluate the proposed network on all public ImageQA benchmark datasets: DAQUAR [17], COCO-QA [23], and VQA [1]. These datasets collect question-answer pairs from existing image datasets, and most of the answers are single words or short phrases.
DAQUAR is based on the NYUDv2 [20] dataset, which is originally designed for indoor segmentation using RGBD images. DAQUAR provides two benchmarks, distinguished by the number of classes and the amount of data: DAQUAR-all consists of 6,795 and 5,673 questions for training and testing respectively, and includes 894 answer categories; DAQUAR-reduced includes only 37 answer categories for 3,876 training and 297 testing questions.
Some questions in this dataset are associated with a set of multiple answers instead of a single one.
The questions in COCO-QA are automatically generated from the image descriptions in the MS COCO dataset [15] using a constituency parser with simple question-answer generation rules. The questions in this dataset are typically long and are explicitly classified into 4 types depending on the generation rules: object, number, color, and location questions. All answers are single words, and there are 78,736 questions for training and 38,948 questions for testing.
Similar to COCO-QA, VQA is also constructed on MS COCO [15], but each question is associated with multiple answers annotated by different people. This dataset contains the largest number of questions: 248,349 for training, 121,512 for validation, and 244,302 for testing, where the testing data is split into test-dev, test-standard, test-challenge, and test-reserve as in [15]. Each question is provided with 10 answers to take the consensus of annotators into account. About 90% of the answers are single words, and 98% of the answers do not exceed three words.
# 6.2. Evaluation Metrics
DAQUAR and COCO-QA employ both classification accuracy and its relaxed version based on word similarity, WUPS [17]. It uses thresholded Wu-Palmer similarity [28] based on the WordNet [9] taxonomy to compute the similarity between words. For the predicted answer set $A^i$ and ground-truth answer set $T^i$ of the $i$th example, WUPS is given by
$$\mathrm{WUPS} = \frac{1}{N} \sum_{i=1}^{N} \min\left\{ \prod_{a \in A^i} \max_{t \in T^i} \mu(a, t),\; \prod_{t \in T^i} \max_{a \in A^i} \mu(a, t) \right\} \tag{14}$$
where $\mu(\cdot, \cdot)$ denotes the thresholded Wu-Palmer similarity between a prediction and a ground-truth. We use two threshold values (0.9 and 0.0) in our evaluation.
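Given any word-similarity function $\mu$, Eq. (14) can be computed as below. This is a sketch: the thresholded Wu-Palmer similarity over WordNet is outside its scope, so `mu` is passed in as a callable, and exact string match stands in for it only in the demonstration.

```python
def wups(predictions, targets, mu):
    # Eq. (14): per example, take the min of the two product-of-max terms, then average.
    total = 0.0
    for A, T in zip(predictions, targets):
        fwd = 1.0
        for a in A:                       # every predicted answer must match something
            fwd *= max(mu(a, t) for t in T)
        bwd = 1.0
        for t in T:                       # every ground-truth answer must be matched
            bwd *= max(mu(a, t) for a in A)
        total += min(fwd, bwd)
    return total / len(predictions)

exact = lambda a, b: 1.0 if a == b else 0.0  # placeholder similarity for illustration
score = wups([["cat"], ["dog"]], [["cat"], ["puppy"]], exact)  # -> 0.5
```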
The VQA dataset provides an open-ended task and a multiple-choice task for evaluation. For the open-ended task, the answer can be any word or phrase, while in the multiple-choice task an answer should be chosen from 18 candidate answers. In both cases, answers are evaluated by an accuracy reflecting human consensus. For predicted answer $a_i$ and target answer set $T^i$ of the $i$th example, the accuracy is given by
$$\mathrm{Acc}_{\mathrm{VQA}} = \frac{1}{N} \sum_{i=1}^{N} \min\left\{ \frac{\sum_{t \in T^i} \mathbb{I}[a_i = t]}{3},\; 1 \right\} \tag{15}$$
where $\mathbb{I}[\cdot]$ denotes an indicator function. In other words, a predicted answer is regarded as correct if at least three annotators agree, and the score depends on the number of agreements when fewer than three agree.
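The three-annotator consensus of Eq. (15) can be sketched directly:

```python
def vqa_accuracy(predictions, answer_sets):
    # Eq. (15): an answer scores min(#agreeing annotators / 3, 1), averaged over examples.
    total = 0.0
    for a, T in zip(predictions, answer_sets):
        agreements = sum(1 for t in T if t == a)
        total += min(agreements / 3.0, 1.0)
    return total / len(predictions)

# With 10 annotators per question: full credit at >= 3 agreements, partial below.
acc = vqa_accuracy(["yes"], [["yes", "yes"] + ["no"] * 8])  # -> 2/3
```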
Table 1. Evaluation results on VQA test-dev in terms of $\mathrm{Acc}_{\mathrm{VQA}}$
                     Open-Ended                      Multiple-Choice
                All    Y/N    Num   Others      All    Y/N    Num   Others
Question [1]   48.09  75.66  36.70  27.14      53.68  75.71  37.05  38.64
Image [1]      28.13  64.01  00.42  03.77      30.53  69.87  00.45  03.76
Q+I [1]        52.64  75.55  33.67  37.37      58.97  75.59  34.35  50.33
LSTM Q [1]     48.76  78.20  35.68  26.59      54.75  78.22  36.82  38.78
LSTM Q+I [1]   53.74  78.94  35.24  36.42      57.17  78.95  35.80  43.41
CONCAT         54.70  77.09  36.62  39.67      59.92  77.10  37.48  50.31
RAND-GRU       55.46  79.58  36.20  39.23      61.18  79.64  38.07  50.63
CNN-FIXED      56.74  80.48  37.20  40.90      61.95  80.56  38.32  51.40
DPPnet         57.22  80.71  37.24  41.69      62.48  80.79  38.94  52.16
Table 2. Evaluation results on VQA test-standard
We test three independent datasets, VQA, COCO-QA, and DAQUAR, and first present the results on the VQA dataset in Table 1. The proposed Dynamic Parameter Prediction network (DPPnet) outperforms all existing methods non-trivially. We performed controlled experiments to analyze the contribution of the individual components of the proposed algorithm, namely dynamic parameter prediction, use of pre-trained GRU, and CNN fine-tuning, and trained 3 additional models: CONCAT, RAND-GRU, and CNN-FIXED. CNN-FIXED is useful to see the impact of CNN fine-tuning, since it is identical to DPPnet except that the weights in the CNN are fixed. RAND-GRU is the model without GRU pre-training, where the weights of GRU and the word embedding model are initialized randomly; it does not fine-tune the CNN either. CONCAT is the most basic model, which predicts answers using two fully-connected layers on a combination of CNN and GRU features. Obviously, it does not employ any of the new components: parameter prediction, pre-trained GRU, or CNN fine-tuning.
The results of the controlled experiment are also illustrated in Table 1. CONCAT already outperforms LSTM Q+I by integrating GRU instead of LSTM [4] and batch normalization. RAND-GRU achieves better accuracy by additionally employing dynamic parameter prediction. It is interesting that most of the improvement comes from yes/no questions, which may involve various kinds of tasks since it is easy to ask about many different aspects of an input image for binary classification. CNN-FIXED improves accuracy further by adding GRU pre-training, and our final model DPPnet achieves the state-of-the-art performance on the VQA dataset with large margins, as illustrated in Tables 1 and 2.
Tables 3, 4, and 5 illustrate the results by all algorithms, including ours, that have reported performance on the COCO-QA, DAQUAR-reduced, and DAQUAR-all datasets. The proposed algorithm outperforms all existing approaches consistently in all benchmarks. In Tables 4 and 5, "single answer" and "multiple answers" denote the two subsets of questions divided by the number of ground-truth answers, and the numbers (0.9 and 0.0) in the header rows are WUPS thresholds.

Table 3. Evaluation results on COCO-QA

Method            Acc    WUPS 0.9  WUPS 0.0
IMG+BOW [23]      55.92  66.78     88.99
2VIS+BLSTM [23]   55.09  65.34     88.64
Ensemble [23]     57.84  67.90     89.52
ConvQA [16]       54.95  65.36     88.58
DPPnet            61.19  70.84     90.61

Table 4. Evaluation results on DAQUAR reduced

                  Single answer          Multiple answers
Method            Acc    0.9    0.0     Acc    0.9    0.0
Multiworld [17]   -      -      -       12.73  18.10  51.47
Askneuron [18]    34.68  40.76  79.54   29.27  36.50  79.47
IMG+BOW [23]      34.17  44.99  81.48   -      -      -
2VIS+BLSTM [23]   35.78  46.83  82.15   -      -      -
Ensemble [23]     36.94  48.15  82.68   -      -      -
ConvQA [16]       39.66  44.86  83.06   38.72  44.19  79.52
DPPnet            44.48  49.56  83.95   44.44  49.06  82.57

Table 5. Evaluation results on DAQUAR all

                  Single answer          Multiple answers
Method            Acc    0.9    0.0     Acc    0.9    0.0
Human [17]        -      -      -       50.20  50.82  67.27
Multiworld [17]   -      -      -       07.86  11.86  38.79
Askneuron [18]    19.43  25.28  62.00   17.49  23.28  57.76
ConvQA [16]       23.40  29.59  62.95   20.69  25.89  55.48
DPPnet            28.98  34.80  67.81   25.60  31.03  60.77
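The thresholded WUPS score used here (from Malinowski and Fritz's DAQUAR evaluation protocol) down-weights per-word similarities below the threshold by a factor of 0.1 and takes the minimum of the two directional products. A minimal sketch, where `word_sim` is an assumption standing in for the WordNet Wu-Palmer similarity (stubbed below with exact match):

```python
def wups_score(answer_words, truth_words, word_sim, threshold):
    """Thresholded WUPS: similarities below the threshold are scaled
    by 0.1; the score is the min of the two directional products."""
    def directional(src, dst):
        score = 1.0
        for w in src:
            best = max(word_sim(w, v) for v in dst)
            if best < threshold:
                best *= 0.1
            score *= best
        return score
    return min(directional(answer_words, truth_words),
               directional(truth_words, answer_words))

# Stub similarity: exact match only (the benchmarks use WordNet Wu-Palmer).
exact = lambda a, b: 1.0 if a == b else 0.0
print(wups_score(["red"], ["red"], exact, 0.9))   # 1.0
print(wups_score(["red"], ["blue"], exact, 0.9))  # 0.0
```

At threshold 0.0 nothing is down-weighted, which is why the WUPS 0.0 columns are much higher than the WUPS 0.9 columns.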
To understand how the parameter prediction network understands questions, we present in Table 6 several representative questions retrieved before and after fine-tuning the GRU, in descending order of their cosine similarities to the query question. The retrieved sentences are frequently determined by common subjective or objective words before fine-tuning, while they rely more on the tasks to be solved after fine-tuning.
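The retrieval described above is plain nearest-neighbor search in the question-embedding space produced by the GRU. A minimal sketch with hypothetical toy vectors standing in for the GRU embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query_vec, candidates):
    """Rank (question, embedding) pairs by descending cosine similarity
    to the query embedding."""
    return sorted(candidates, key=lambda item: cosine(query_vec, item[1]),
                  reverse=True)

# Toy 2-D embeddings (illustrative values only).
cands = [("Is he feeding the birds?", [0.9, 0.1]),
         ("What colors are the birds?", [0.2, 0.8])]
best, _ = retrieve([1.0, 0.0], cands)[0]
print(best)  # Is he feeding the birds?
```

Comparing the rankings produced by the pre-trained and fine-tuned embeddings yields the before/after columns of Table 6.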
The qualitative results of the proposed algorithm are presented in Figure 4. In general, the proposed network successfully handles various types of questions that need different levels of semantic understanding. Figure 4(a) shows that the network is able to adapt its recognition task depending on the question. However, it often fails on questions asking for the number of occurrences, since these questions involve tasks (e.g., object detection) that are difficult to learn only with image-level annotations. On the other hand, the proposed network finds the answers for the same question on different images fairly well, as illustrated in Figure 4(b). Refer to our project website2 for more comprehensive qualitative results.
2 http://cvlab.postech.ac.kr/research/dppnet/
Table 6. Retrieved sentences before and after fine-tuning GRU
Query question: What body part has most recently contacted the ball?
Before fine-tuning:
- What shape is the ball?
- What colors are the ball?
- What team has the ball?
- How many times has the girl hit the ball?
- What number is on the women's Jersey closest to the ball?
- What is unusual about the ball?
- What is the speed of the ball?
After fine-tuning:
- What body part is the boy holding the bear by?
- What body part is on the right side of this picture?
- What human body part is on the table?
- What body parts appear to be touching?
- What partial body parts are in the foreground?
- What part of the body does the woman on the left have on the ramp?
- Name a body part that would not be visible if the woman's mouth was closed?

Query question: Is the person feeding the birds?
Before fine-tuning:
- Is he feeding the birds?
- Is the reptile fighting the birds?
- Does the elephant want to play with the birds?
- What is the fence made of behind the birds?
- Where are the majority of the birds?
- What colors are the birds?
- Is this man feeding the pigeons?
After fine-tuning:
- Is he feeding the birds?
- Is the person feeding the sheep?
- Is the man feeding the pigeons?
- Is she feeding the pigeons?
- Is that the zookeeper feeding the giraffes?
- Is the reptile fighting the birds?
- Does the elephant want to play with the birds?
[Figure 4: qualitative examples (images omitted); the recoverable question/answer pairs are listed below.]

(a) Results of the proposed algorithm on multiple questions for a single image:
Q: How does the woman feel? DPPnet: happy
Q: What type of hat is she wearing? DPPnet: cowboy
Q: Is it raining? DPPnet: no
Q: What is he holding? DPPnet: umbrella
Q: What is he doing? DPPnet: skateboarding
Q: Is this person dancing? DPPnet: no
Q: How many cranes are in the image? DPPnet: 2 (3)
Q: How many people are on the bench? DPPnet: 2 (1)

(b) Results of the proposed algorithm on the same question for different images:
Q: What is the boy holding? DPPnet: surfboard