doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1611.01673 | 26 | to arise from more complex game dynamics. For this reason, game theory and game design will likely be important. ACKNOWLEDGMENTS We acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel, Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40 GPU. This material is based upon work supported by the National Science Foundation under Grant Nos. IIS-1564032. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. | 1611.01673#26 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 27 | 3.3 CHARACTER-LEVEL NEURAL MACHINE TRANSLATION
We evaluate the sequence-to-sequence QRNN architecture described in 2.1 on a challenging neural machine translation task, IWSLT German-English spoken-domain translation, applying fully character-level segmentation. This dataset consists of 209,772 sentence pairs of parallel training data from transcribed TED and TEDx presentations, with a mean sentence length of 103 characters for German and 93 for English. We remove training sentences with more than 300 characters in English or German, and use a unified vocabulary of 187 Unicode code points. | 1611.01576#27 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
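The preprocessing described in the chunk of record 1611.01576#27 above (dropping sentence pairs longer than 300 characters and building a unified character vocabulary) can be sketched as follows; the file names and vocabulary-construction details are assumptions for illustration, not the authors' released pipeline.

```python
# Sketch of the character-level preprocessing described above.
# File names are hypothetical; the paper reports 187 code points for the unified vocabulary.
def load_pairs(de_path="train.de", en_path="train.en"):
    with open(de_path, encoding="utf-8") as f_de, open(en_path, encoding="utf-8") as f_en:
        return list(zip((l.rstrip("\n") for l in f_de), (l.rstrip("\n") for l in f_en)))

def filter_and_build_vocab(pairs, max_chars=300):
    # Remove sentence pairs with more than 300 characters on either side.
    kept = [(de, en) for de, en in pairs if len(de) <= max_chars and len(en) <= max_chars]
    # Unified (shared source/target) character vocabulary.
    vocab = sorted({ch for de, en in kept for ch in de + en})
    stoi = {ch: i for i, ch in enumerate(vocab)}
    return kept, stoi

if __name__ == "__main__":
    pairs, stoi = filter_and_build_vocab(load_pairs())
    print(len(pairs), "pairs,", len(stoi), "characters in the unified vocabulary")
```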
1611.01600 | 27 | # Table 1: Test error rates (%) for feedforward neural network models.
| Method | MNIST | CIFAR-10 | SVHN |
|---|---|---|---|
| full-precision (no binarization) | 1.190 | 11.900 | 2.277 |
| BinaryConnect (binarize weights) | 1.280 | 9.860 | 2.450 |
| BWN (binarize weights) | 1.310 | 10.510 | 2.535 |
| LAB (binarize weights) | 1.180 | 10.500 | 2.354 |
| BNN (binarize weights and activations) | 1.470 | 12.870 | 3.500 |
| XNOR (binarize weights and activations) | 1.530 | 12.620 | 3.435 |
| LAB2 (binarize weights and activations) | 1.380 | 12.280 | 3.362 |
4.2 VARYING THE NUMBER OF FILTERS IN CNN
As in Zhou et al. (2016), we study sensitivity to network width by varying the number of filters K on the SVHN data set. As in Section 4.1, we use the model
(2×K C3) - MP2 - (2×2K C3) - MP2 - (2×4K C3) - MP2 - (2×1024 FC) - 10 SVM.
Results are shown in Table 2. Again, the proposed LAB has the best performance. Moreover, as the number of filters increases, degradation due to binarization becomes less severe. This suggests
| 1611.01600#27 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
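The CNN used in record 1611.01600#27/#28 above follows the layer pattern (2×K C3) - MP2 - (2×2K C3) - MP2 - (2×4K C3) - MP2 - (2×1024 FC) - 10 SVM. A minimal full-precision PyTorch sketch of that pattern is below; the padding, batch normalization, and the plain linear output standing in for the SVM layer are assumptions, and the binarization itself is not shown.

```python
import torch
import torch.nn as nn

def svhn_cnn(K=64, num_classes=10):
    """Sketch of the (2xK C3)-MP2-(2x2K C3)-MP2-(2x4K C3)-MP2-(2x1024 FC)-10 SVM pattern."""
    def block(c_in, c_out):
        # 3x3 convolution with assumed same-padding, followed by batch norm and ReLU.
        return [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU()]
    layers = []
    layers += block(3, K) + block(K, K) + [nn.MaxPool2d(2)]                  # 2 x (K C3), MP2 -> 16x16
    layers += block(K, 2 * K) + block(2 * K, 2 * K) + [nn.MaxPool2d(2)]      # 2 x (2K C3), MP2 -> 8x8
    layers += block(2 * K, 4 * K) + block(4 * K, 4 * K) + [nn.MaxPool2d(2)]  # 2 x (4K C3), MP2 -> 4x4
    layers += [nn.Flatten(),
               nn.Linear(4 * K * 4 * 4, 1024), nn.ReLU(),
               nn.Linear(1024, 1024), nn.ReLU(),
               nn.Linear(1024, num_classes)]    # class scores fed to a multi-class hinge (SVM) loss
    return nn.Sequential(*layers)

model = svhn_cnn(K=64)
print(model(torch.randn(2, 3, 32, 32)).shape)   # torch.Size([2, 10])
```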
1611.01626 | 27 | b) − Q(s, a), which would be equivalent to using an optimizing critic that bootstraps using the max Q-value at the next state, i.e., $\tilde{Q}(s,a) = r(s,a) + \gamma \max_b Q(s',b)$. In REINFORCE the critic is the Monte Carlo return from that state on, i.e., $\tilde{Q}(s,a) = \mathbb{E}(\sum_{t \ge 0} \gamma^t r_t \mid s_0 = s, a_0 = a)$. If the return trace is truncated and a bootstrap is performed after n steps, this is equivalent to n-step SARSA or n-step Q-learning, depending on the form of the bootstrap (Peng & Williams, 1996). | 1611.01626#27 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
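The critics contrasted in record 1611.01626#27 above (a Monte Carlo return versus an n-step bootstrapped target) can be made concrete with a small sketch; the reward trace, discount, and bootstrap value below are made-up numbers.

```python
def monte_carlo_return(rewards, gamma=0.99):
    # REINFORCE-style critic: the full discounted return from the state on.
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def n_step_target(rewards, bootstrap_value, n, gamma=0.99):
    # Truncate the trace after n steps and bootstrap with a value estimate;
    # the choice of bootstrap_value determines n-step SARSA vs. n-step Q-learning.
    partial = sum((gamma ** t) * r for t, r in enumerate(rewards[:n]))
    return partial + (gamma ** n) * bootstrap_value

rewards = [0.0, 1.0, 0.0, 0.0, 2.0]
print(monte_carlo_return(rewards))
print(n_step_target(rewards, bootstrap_value=1.5, n=2))
```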
1611.01673 | 27 | Published as a conference paper at ICLR 2017
# BIBLIOGRAPHY
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv: 1603.04467, 2016.
Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014.
J Andrew Bagnell. Robust supervised learning. In Proceedings Of The National Conference On Artificial Intelligence, volume 20, pp. 714. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999, 2005.
Alina Beygelzimer, Satyen Kale, and Haipeng Luo. Optimal and adaptive algorithms for online boosting. arXiv preprint arXiv: 1502.02651, 2015. | 1611.01673#27 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 28 | Our best performance on a development set (TED.tst2013) was achieved using a four-layer encoder-decoder QRNN with 320 units per layer, no dropout or L2 regularization, and gradient rescaling to a maximum magnitude of 5. Inputs were supplied to the encoder reversed, while the encoder convolutions were not masked. The first encoder layer used convolutional filter width k = 6, while the other encoder layers used k = 2. Optimization was performed for 10 epochs on minibatches of 16 examples using Adam (Kingma & Ba, 2014) with α = 0.001, β1 = 0.9, β2 = 0.999, and ε = 10^-8. Decoding was performed using beam search with beam width 8 and length normalization α = 0.6. The modified log-probability ranking criterion is provided in the appendix.
Results using this architecture were compared to an equal-sized four-layer encoder-decoder LSTM with attention, applying dropout of 0.2. We again optimized using Adam; other hyperparameters were equal to their values for the QRNN and the same beam search procedure was applied. Table 3 shows that the QRNN outperformed the character-level LSTM, almost matching the performance of a word-level attentional baseline.
| 1611.01576#28 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
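Record 1611.01576#28 above decodes with beam width 8 and length normalization α = 0.6, with the modified log-probability ranking criterion given in the paper's appendix. The sketch below uses the common GNMT-style length penalty as an assumed stand-in for that criterion; the hypotheses are toy data.

```python
import math

def length_penalty(length, alpha=0.6):
    # GNMT-style penalty, used here as an assumed stand-in for the paper's ranking criterion.
    return ((5.0 + length) / 6.0) ** alpha

def rank_hypotheses(hyps, alpha=0.6):
    # Each hypothesis is (token_list, summed_log_probability); rank by length-normalized score.
    return sorted(hyps, key=lambda h: h[1] / length_penalty(len(h[0]), alpha), reverse=True)

beam = [(["d", "a", "s"], -1.2), (["d", "a", "s", "</s>"], -1.5), (["d", "i", "e", "</s>"], -1.4)]
print(rank_hypotheses(beam)[0])
```

Without the penalty, shorter hypotheses with fewer accumulated log-probability terms tend to win; the normalization trades that bias against hypothesis length.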
1611.01578 | 28 | Finally, if we allow the controller to include 2 pooling layers at layer 13 and layer 24 of the architectures, the controller can design a 39-layer network that achieves 4.47%, which is very close to the best human-invented architecture that achieves 3.74%. To limit the search space complexity we have our model predict 13 layers where each layer prediction is a fully connected block of 3 layers. Additionally, we change the number of filters our model can predict from [24, 36, 48, 64] to [6, 12, 24, 36]. Our result can be improved to 3.65% by adding 40 more filters to each layer of our architecture. Additionally this model with 40 filters added is 1.05x as fast as the DenseNet model that achieves 3.74%, while having better performance. The DenseNet model that achieves 3.46% error rate (Huang et al., 2016b) uses 1x1 convolutions to reduce its total number of parameters, which we did not do, so it is not an exact comparison.
4.2 LEARNING RECURRENT CELLS FOR PENN TREEBANK | 1611.01578#28 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 28 |
Figure 1: Convergence of LAB with feedforward neural networks on (a) MNIST, (b) CIFAR-10, and (c) SVHN. [training-error-vs-epoch plots omitted]
that more powerful models (e.g., CNN with more filters, standard feedforward networks with more hidden units) are less susceptible to performance degradation due to binarization. We speculate that this is because large networks often have larger-than-needed capacities, and so are less affected by the limited expressiveness of binary weights. Another related reason is that binarization acts as regularization, and so contributes positively to the performance.
Table 2: Test error rates (%) on SVHN, for CNNs with different numbers of filters. Number in brackets is the difference between the errors of the binarized scheme and the full-precision network. For K = 32: full-precision 2.585, BinaryConnect 2.777 (0.192), BWN 2.743 (0.158), LAB 2.742 (0.157). | 1611.01600#28 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
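For context on the schemes compared in records 1611.01600#27-#28 above, the sketch below shows BWN-style scaled binarization (scale equal to the mean absolute weight) and an illustrative curvature-weighted variant in the spirit of loss-aware binarization. The latter is a sketch under the assumption that d approximates a diagonal Hessian (e.g., Adam's second-moment statistics); it is not presented as the paper's exact proximal Newton step.

```python
import numpy as np

def binarize_bwn(w):
    # BWN-style binarization: b = sign(w), scale alpha = mean |w|
    # (the minimizer of ||w - alpha * b||^2 over alpha and binary b).
    alpha = np.abs(w).mean()
    return alpha, np.sign(w)

def binarize_curvature_weighted(w, d):
    # Illustrative curvature-weighted scale: a d-weighted average of |w|,
    # so weights with larger estimated curvature dominate the scale.  Sketch only.
    alpha = float(np.dot(d, np.abs(w)) / d.sum())
    return alpha, np.sign(w)

w = np.array([0.7, -0.2, 0.05, -0.9])
d = np.array([1.0, 0.1, 0.1, 2.0])
print(binarize_bwn(w))
print(binarize_curvature_weighted(w, d))
```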
1611.01603 | 28 |
| Query | Layer | Closest words in the Context using cosine similarity |
|---|---|---|
| When | Word | when, When, After, after, He, he, But, but, before, Before |
| When | Contextual | When, when, 1945, 1991, 1971, 1967, 1990, 1972, 1965, 1953 |
| Where | Word | Where, where, It, IT, it, they, They, that, That, city |
| Where | Contextual | where, Where, Rotterdam, area, Nearby, location, outside, Area, across, locations |
| Who | Word | Who, who, He, he, had, have, she, She, They, they |
| Who | Contextual | who, whose, whom, Guiscard, person, John, Thomas, families, Elway, Louis |
| city | Word | City, city, town, Town, Capital, capital, district, cities, province, Downtown |
| city | Contextual | city, City, Angeles, Paris, Prague, Chicago, Port, Pittsburgh, London, Manhattan |
| January | Word | July, December, June, October, January, September, February, April, November, March |
| January | Contextual | January, March, December, August, December, July, July, July, March, December |
| Seahawks | Word | Seahawks, Broncos, 49ers, Ravens, Chargers, Steelers, quarterback, Vikings, Colts, NFL |
| Seahawks | Contextual | Seahawks, Broncos, (truncated in this chunk; the date rows are also cut off) |
| 1611.01603#28 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
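The closest-word lists in record 1611.01603#28 above are computed with cosine similarity between a query word's embedding and each context word's embedding. A minimal numpy sketch of that lookup (toy random vectors, not BiDAF's learned embeddings):

```python
import numpy as np

def closest_words(query_vec, context_vecs, context_words, k=10):
    # Rank context words by cosine similarity to the query word's embedding.
    q = query_vec / np.linalg.norm(query_vec)
    c = context_vecs / np.linalg.norm(context_vecs, axis=1, keepdims=True)
    sims = c @ q
    order = np.argsort(-sims)[:k]
    return [(context_words[i], float(sims[i])) for i in order]

rng = np.random.default_rng(0)
words = ["1945", "Rotterdam", "Seahawks", "January", "city"]
vecs = rng.normal(size=(len(words), 8))
print(closest_words(vecs[0] + 0.1 * rng.normal(size=8), vecs, words, k=3))
```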
1611.01626 | 28 | 3.4 BELLMAN RESIDUAL
In this section we show that $\|T^* Q^{\pi_\alpha} - Q^{\pi_\alpha}\| \to 0$ with decreasing regularization penalty $\alpha$, where $\pi_\alpha$ is the policy defined by (4) and $Q^{\pi_\alpha}$ is the corresponding Q-value function, both of which are functions of $\alpha$. We shall show that it converges to zero by bounding the sequence below by zero and above with a sequence that converges to zero. First, we have that $T^* Q^{\pi_\alpha} \geq T^{\pi_\alpha} Q^{\pi_\alpha} = Q^{\pi_\alpha}$, since $T^*$ is greedy with respect to the Q-values. So $T^* Q^{\pi_\alpha} - Q^{\pi_\alpha} \geq 0$. Now, to bound from above we need the fact that $\pi_\alpha(s,a) = \exp(Q^{\pi_\alpha}(s,a)/\alpha)/\sum_b \exp(Q^{\pi_\alpha}(s,b)/\alpha) \leq \exp((Q^{\pi_\alpha}(s,a) - \max_c Q^{\pi_\alpha}(s,c))/\alpha)$. Using this we have | 1611.01626#28 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 28 | Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.
Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486-1494, 2015.
Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, 2014, 2014. | 1611.01673#28 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 29 |
| Model | Train Time | BLEU (TED.tst2014) |
|---|---|---|
| Word-level LSTM w/attn (Ranzato et al., 2016) | - | 20.2 |
| Word-level CNN w/attn, input feeding (Wiseman & Rush, 2016) | - | 24.0 |
| Char-level 4-layer LSTM (ours) | 4.2 hrs/epoch | 16.53 |
| Char-level 4-layer QRNN with k = 6 (ours) | 1.0 hrs/epoch | 19.41 |
Table 3: Translation performance, measured by BLEU, and train speed in hours per epoch, for the IWSLT German-English spoken language translation task. All models were trained on in-domain data only, and use negative log-likelihood as the training criterion. Our models were trained for 10 epochs. The QRNN model uses k = 2 for all layers other than the first encoder layer.
# 4 RELATED WORK | 1611.01576#29 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 29 | 4.2 LEARNING RECURRENT CELLS FOR PENN TREEBANK
Dataset: We apply Neural Architecture Search to the Penn Treebank dataset, a well-known benchmark for language modeling. On this task, LSTM architectures tend to excel (Zaremba et al., 2014; Gal, 2015), and improving them is difficult (Jozefowicz et al., 2015). As PTB is a small dataset, regularization methods are needed to avoid overfitting. First, we make use of the embedding dropout and recurrent dropout techniques proposed in Zaremba et al. (2014) and (Gal, 2015). We also try to combine them with the method of sharing Input and Output embeddings, e.g., Bengio et al. (2003); Mnih & Hinton (2007), especially Inan et al. (2016) and Press & Wolf (2016). Results with this method are marked with "shared embeddings." | 1611.01578#29 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
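The "shared embeddings" regularizer mentioned in record 1611.01578#29 above (Inan et al., 2016; Press & Wolf, 2016) ties the input embedding matrix to the output softmax matrix. A minimal PyTorch sketch of the tying itself; the vocabulary size, hidden size, and layer count are placeholder values, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TiedLM(nn.Module):
    # Minimal language-model skeleton with tied input/output embeddings.
    def __init__(self, vocab_size=10000, hidden=650):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.decoder = nn.Linear(hidden, vocab_size, bias=False)
        self.decoder.weight = self.embed.weight  # weight tying: one matrix serves input and output

    def forward(self, tokens):                   # tokens: (batch, time) integer ids
        out, _ = self.rnn(self.embed(tokens))
        return self.decoder(out)                 # per-step next-token logits

model = TiedLM()
print(model(torch.randint(0, 10000, (2, 5))).shape)   # torch.Size([2, 5, 10000])
```

Tying requires the embedding dimension to equal the decoder input dimension, which is why both are set to the same hidden size here.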
1611.01600 | 29 |
| Method | K = 16 | K = 64 | K = 128 |
|---|---|---|---|
| full-precision | 2.738 | 2.277 | 2.146 |
| BinaryConnect | 3.200 (0.462) | 2.450 (0.173) | 2.315 (0.169) |
| BWN | 3.119 (0.461) | 2.535 (0.258) | 2.319 (0.173) |
| LAB | 3.050 (0.312) | 2.354 (0.077) | 2.200 (0.054) |
4.3 RECURRENT NEURAL NETWORKS
In this section, we perform experiments on the popular long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997). Performance is evaluated in the context of character-level language modeling. The LSTM takes as input a sequence of characters, and predicts the next character at each time step. The training objective is the cross-entropy loss over all target sequences. Following Karpathy et al. (2016), we use two data sets (with the same training/validation/test set splitting): (i) Leo Tolstoy's War and Peace, which consists of 3258246 characters of almost entirely English text with minimal markup and has a vocabulary size of 87; and (ii) the source code of the Linux Kernel, which consists of 6206996 characters and has a vocabulary size of 101. | 1611.01600#29 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01626 | 29 | $0 \leq T^*Q^{\pi_\alpha}(s,a) - Q^{\pi_\alpha}(s,a) = T^*Q^{\pi_\alpha}(s,a) - T^{\pi_\alpha}Q^{\pi_\alpha}(s,a) = \gamma\,\mathbb{E}_{s'}\big(\max_c Q^{\pi_\alpha}(s',c) - \sum_b \pi_\alpha(s',b)\,Q^{\pi_\alpha}(s',b)\big) = \gamma\,\mathbb{E}_{s'}\sum_b \pi_\alpha(s',b)\big(\max_c Q^{\pi_\alpha}(s',c) - Q^{\pi_\alpha}(s',b)\big) \leq \gamma\,\mathbb{E}_{s'}\sum_b \exp\big((Q^{\pi_\alpha}(s',b) - \max_c Q^{\pi_\alpha}(s',c))/\alpha\big)\big(\max_c Q^{\pi_\alpha}(s',c) - Q^{\pi_\alpha}(s',b)\big) = \gamma\,\mathbb{E}_{s'}\sum_b f_\alpha\big(\max_c Q^{\pi_\alpha}(s',c) - Q^{\pi_\alpha}(s',b)\big),$
where we define $f_\alpha(x) = x\exp(-x/\alpha)$. To conclude our proof we use the fact that $f_\alpha(x) \leq \sup_x f_\alpha(x) = f_\alpha(\alpha) = \alpha e^{-1}$, which yields
$0 \leq T^*Q^{\pi_\alpha}(s,a) - Q^{\pi_\alpha}(s,a) \leq \gamma\,|\mathcal{A}|\,\alpha e^{-1}$ | 1611.01626#29 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
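The bound used in record 1611.01626#29 above relies on f_α(x) = x·exp(−x/α) attaining its supremum α·e^{−1} at x = α. A small numerical check of that fact, and of the Boltzmann policy's Bellman gap at one state shrinking as α decreases (the Q-values below are made up):

```python
import numpy as np

def f(x, alpha):
    # f_alpha(x) = x * exp(-x / alpha), maximized at x = alpha with value alpha / e.
    return x * np.exp(-x / alpha)

alpha = 0.1
xs = np.linspace(0.0, 5.0, 200001)
print(f(xs, alpha).max(), alpha / np.e)   # numerically equal up to the grid resolution

# Gap between the greedy backup and the Boltzmann-policy backup at a single successor state.
q = np.array([1.0, 0.5, -0.2, 0.3])
for a in [1.0, 0.1, 0.01]:
    pi = np.exp(q / a) / np.exp(q / a).sum()
    print(a, q.max() - pi @ q)            # bounded by |A| * a / e, and shrinking with a
```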
1611.01673 | 29 | Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. arXiv preprint arXiv: 1606.03476, 2016.
Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv: 1602.05110, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv: 1502.03167, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's Thesis, 2009.
Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits, 1998. | 1611.01673#29 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 30 | Exploring alternatives to traditional RNNs for sequence tasks is a major area of current research. Quasi-recurrent neural networks are related to several such recently described models, especially the strongly-typed recurrent neural networks (T-RNN) introduced by Balduzzi & Ghifary (2016). While the motivation and constraints described in that work are different, Balduzzi & Ghifary's (2016) concepts of "learnware" and "firmware" parallel our discussion of convolution-like and pooling-like subcomponents. As the use of a fully connected layer for recurrent connections violates the constraint of "strong typing", all strongly-typed RNN architectures (including the T-RNN, T-GRU, and T-LSTM) are also quasi-recurrent. However, some QRNN models (including those with attention or skip-connections) are not "strongly typed". In particular, a T-RNN differs from a QRNN as described in this paper with filter size 1 and f-pooling only in the absence of an activation function on z. Similarly, T-GRUs and T-LSTMs differ | 1611.01576#30 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
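Record 1611.01576#30 above refers to the QRNN's f-pooling, the minimalist recurrence h_t = f_t ⊙ h_{t−1} + (1 − f_t) ⊙ z_t applied elementwise across channels. A numpy sketch with toy gate and candidate sequences (the convolutional layers that would produce them are omitted):

```python
import numpy as np

def f_pooling(Z, F, h0=None):
    # h_t = f_t * h_{t-1} + (1 - f_t) * z_t, applied independently per channel.
    T, d = Z.shape
    h = np.zeros(d) if h0 is None else h0
    out = np.empty_like(Z)
    for t in range(T):
        h = F[t] * h + (1.0 - F[t]) * Z[t]
        out[t] = h
    return out

rng = np.random.default_rng(0)
Z = np.tanh(rng.normal(size=(5, 3)))                  # candidate vectors z_t from the conv layer
F = 1.0 / (1.0 + np.exp(-rng.normal(size=(5, 3))))    # forget gates f_t in (0, 1)
print(f_pooling(Z, F))
```

The pooling step has no trainable parameters of its own, which is what lets the convolutional part run in parallel across timesteps.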
1611.01578 | 30 | Search space: Following Section 3.4, our controller sequentially predicts a combination method then an activation function for each node in the tree. For each node in the tree, the controller RNN needs to select a combination method in [add, elem mult] and an activation method in [identity, tanh, sigmoid, relu]. The number of input pairs to the RNN cell is called the "base number" and set to 8 in our experiments. When the base number is 8, the search space has approximately 6 × 10^16 architectures, which is much larger than 15,000, the number of architectures that we allow our controller to evaluate.
Training details: The controller and its training are almost identical to the CIFAR-10 experiments except for a few modifications: 1) the learning rate for the controller RNN is 0.0005, slightly smaller than that of the controller RNN in CIFAR-10, 2) in the distributed training, we set S to 20, K to 400 and m to 1, which means there are 400 networks being trained on 400 CPUs concurrently at any time, 3) during asynchronous training we only do parameter updates to the parameter-server once 10 gradients from replicas have been accumulated. | 1611.01578#30 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
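The controller choices listed in record 1611.01578#30 above (a combination method in [add, elem mult] and an activation in [identity, tanh, sigmoid, relu] per node, with base number 8) can be illustrated by sampling a random cell description. The naive count in the comment ignores the connectivity choices that the authors' ~6 × 10^16 figure includes, so it is only a lower bound for illustration.

```python
import random

COMBINERS = ["add", "elem_mult"]
ACTIVATIONS = ["identity", "tanh", "sigmoid", "relu"]

def sample_cell(base_number=8, seed=None):
    # One (combination, activation) choice per tree node, mirroring the controller's per-node outputs.
    rng = random.Random(seed)
    return [(rng.choice(COMBINERS), rng.choice(ACTIVATIONS)) for _ in range(base_number)]

print(sample_cell(seed=0))
# Naive per-node count: (2 combiners x 4 activations) ** 8 = 8 ** 8, about 1.7e7 cells;
# the full search space is far larger once the cell's connectivity choices are included.
print((len(COMBINERS) * len(ACTIVATIONS)) ** 8)
```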
1611.01600 | 30 | We use a one-layer LSTM with 512 cells. The maximum number of epochs is 200, and the number of time steps is 100. The initial learning rate is 0.002. After 10 epochs, it is decayed by a factor of 0.98 after each epoch. The weights are initialized uniformly in [-0.08, 0.08]. After each iteration, the gradients are clipped to the range [-5, 5], and all the updated weights are clipped to [-1, 1]. For the weight-and-activation-binarized networks, we do not binarize the inputs, as they are one-hot vectors in this language modeling task.
Table 3 shows the testing cross-entropy values. As in Section 4.1, the proposed LAB outperforms other weight binarization schemes, and is even better than the full-precision network on the Linux Kernel data set. BinaryConnect does not work well here because of the problem of exploding gradients (see Section 3.2 and more results in Section 4.4). On the other hand, BWN and the proposed LAB scale the binary weight matrix and perform better. LAB also performs better than BWN as curvature information is considered. Similarly, among schemes that binarize both weights and activations, the proposed LAB2 also outperforms BNN and XNOR-Network. | 1611.01600#30 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
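A minimal full-precision PyTorch sketch of the character-level LSTM setup in record 1611.01600#30 above (one layer of 512 cells, Adam at learning rate 0.002 with a 0.98 decay after epoch 10, gradient clipping to [−5, 5], and weight clipping to [−1, 1]); the binarization step itself is omitted and the dummy batch is made up.

```python
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    # One-layer LSTM with 512 cells over one-hot character inputs.
    def __init__(self, vocab_size=87, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(vocab_size, hidden, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, onehot):                  # onehot: (batch, time_steps, vocab_size)
        h, _ = self.lstm(onehot)
        return self.out(h)                      # per-step logits for the next character

model = CharLSTM()
opt = torch.optim.Adam(model.parameters(), lr=0.002)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.98)   # step once per epoch after epoch 10
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    opt.zero_grad()
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), y.reshape(-1))
    loss.backward()
    torch.nn.utils.clip_grad_value_(model.parameters(), 5.0)   # clip gradients to [-5, 5]
    opt.step()
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-1.0, 1.0)                 # clip updated weights to [-1, 1]
    return float(loss)

x = torch.zeros(4, 100, 87); x[..., 0] = 1.0    # dummy batch: 100 time steps of one-hot characters
y = torch.zeros(4, 100, dtype=torch.long)
print(train_step(x, y))
```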
1611.01603 | 30 | Table 2: Closest context words to a given query word, using a cosine similarity metric computed in the Word Embedding feature space and the Phrase Embedding feature space.
[Figure 2: t-SNE plots of the Word Embed Space and the Phrase Embed Space, and comparisons of the questions answered correctly by our BIDAF model and the more traditional baseline; plot contents omitted.] | 1611.01603#30 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 30 | $0 \leq T^*Q^{\pi_\alpha}(s,a) - Q^{\pi_\alpha}(s,a) \leq \gamma\,|\mathcal{A}|\,\alpha e^{-1}$
for all $(s,a)$, and so the Bellman residual converges to zero with decreasing $\alpha$. In other words, for small enough $\alpha$ (which is the regime we are interested in) the Q-values induced by the policy will have a small Bellman residual. Moreover, this implies that $\lim_{\alpha \to 0} Q^{\pi_\alpha} = Q^*$, as one might expect.
# 4 PGQL
In this section we introduce the main contribution of the paper, which is a technique to combine pol- icy gradient with Q-learning. We call our technique âPGQLâ, for policy gradient and Q-learning. In the previous section we showed that the Bellman residual is small at the ï¬xed point of a regularized
6
Published as a conference paper at ICLR 2017 | 1611.01626#30 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 30 | Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits, 1998.
Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In International Conference on Machine Learning, pp. 1718-1727, 2015.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv: 1511.05644, 2015.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv: 1606.00709, 2016. | 1611.01673#30 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01578 | 31 | In our experiments, every child model is constructed and trained for 35 epochs. Every child model has two layers, with the number of hidden units adjusted so that total number of learnable parameters approximately match the "medium" baselines (Zaremba et al., 2014; Gal, 2015). In these experiments we only have the controller predict the RNN cell structure and fix all other hyperparameters. The reward function is
After the controller RNN is done training, we take the best RNN cell according to the lowest validation perplexity and then run a grid search over learning rate, weight initialization, dropout rates
and decay epoch. The best cell found was then run with three different configurations and sizes to increase its capacity.
Results: In Table 2, we provide a comprehensive list of architectures and their performance on the PTB dataset. As can be seen from the table, the models found by Neural Architecture Search outperform other state-of-the-art models on this dataset, and one of our best models achieves a gain of almost 3.6 perplexity. Not only is our cell better, the model that achieves 64 perplexity is also more than two times faster because the previous best network requires running a cell 10 times per time step (Zilly et al., 2016). | 1611.01578#31 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
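The post-search tuning in record 1611.01578#31 above is a grid search over learning rate, weight initialization, dropout rates and decay epoch. A generic sketch of such a sweep; the grid values and the toy evaluation function are placeholders, not the authors' settings.

```python
import itertools

# Hypothetical grid mirroring the post-search tuning described above.
grid = {
    "learning_rate": [0.1, 0.5, 1.0],
    "init_scale": [0.04, 0.08],
    "dropout": [0.5, 0.65, 0.75],
    "decay_epoch": [10, 14],
}

def grid_search(train_and_eval):
    # Exhaustively evaluate every configuration and keep the lowest validation perplexity.
    best_cfg, best_ppl = None, float("inf")
    for values in itertools.product(*grid.values()):
        config = dict(zip(grid.keys(), values))
        ppl = train_and_eval(**config)          # user-supplied training/evaluation routine
        if ppl < best_ppl:
            best_cfg, best_ppl = config, ppl
    return best_cfg, best_ppl

# Toy stand-in for an actual training run:
print(grid_search(lambda **cfg: 62.0 + cfg["dropout"] - 0.01 * cfg["decay_epoch"]))
```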
1611.01600 | 31 | 4.4 VARYING THE NUMBER OF TIME STEPS IN LSTM
In this experiment, we study the sensitivity of the binarization schemes with varying numbers of unrolled time steps (TS) in LSTM. Results are shown in Table 4. Again, the proposed LAB has the best performance. When TS = 10, the LSTM is relatively shallow, and all binarization schemes have similar performance as the full-precision network. When TS ≥ 50, BinaryConnect fails,
Table 3: Testing cross-entropy values of LSTM.
| Method | War and Peace | Linux Kernel |
|---|---|---|
| full-precision (no binarization) | 1.268 | 1.329 |
| BinaryConnect (binarize weights) | 2.942 | 3.532 |
| BWN (binarize weights) | 1.313 | 1.307 |
| LAB (binarize weights) | 1.291 | 1.305 |
| BNN (binarize weights and activations) | 3.050 | 3.624 |
| XNOR (binarize weights and activations) | 1.424 | 1.426 |
| LAB2 (binarize weights and activations) | 1.376 | 1.409 |
| 1611.01600#31 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 31 | Figure 2: (a) t-SNE visualizations of the months names embedded in the two feature spaces. The contextual embedding layer is able to distinguish the two usages of the word May using context from the surrounding text. (b) Venn diagram of the questions answered correctly by our model and the more traditional baseline (Rajpurkar et al., 2016). (c) Correctly answered questions broken down by the 10 most frequent first words in the question.
Visualizations. We now provide a qualitative analysis of our model on the SQuAD dev set. First, we visualize the feature spaces after the word and contextual embedding layers. These two layers are responsible for aligning the embeddings between the query and context words which are the inputs to the subsequent attention layer. To visualize the embeddings, we choose a few frequent query words in the dev data and look at the context words that have the highest cosine similarity to the query words (Table 2). At the word embedding layer, query words such as When, Where and Who are not well aligned to possible answers in the context, but this dramatically changes in the contextual embedding layer which has access to context from surrounding words and is just 1 layer below the attention layer. When begins to match years, Where matches locations, and Who matches names. | 1611.01603#31 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 31 |
policy gradient algorithm when the regularization penalty is sufficiently small. This suggests adding an auxiliary update where we explicitly attempt to reduce the Bellman residual as estimated from the policy, i.e., a hybrid between policy gradient and Q-learning. We first present the technique in a batch update setting, with a perfect knowledge of Q^π (i.e., a perfect critic). Later we discuss the practical implementation of the technique in a reinforcement learning setting with function approximation, where the agent generates experience from interacting with the environment and needs to estimate a critic simultaneously with the policy.
4.1 PGQL UPDATE
Define the estimate of Q using the policy as
Q̃^π(s, a) = α(log π(s, a) + H^π(s)) + V(s),   (12)
where V has parameters w and is not necessarily V^π as it was in equation (5). In (2) it was unnecessary to estimate the constant since the update was invariant to constant offsets, although in practice it is often estimated for use in a variance reduction technique (Williams, 1992; Sutton et al., 1999). | 1611.01626#31 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 31 | Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv: 1511.06434, 2015.
Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabas Poczos. Enabling dark energy science with deep generative models of galaxy images. arXiv preprint arXiv: 1609.05796, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv: 1606.03498, 2016.
Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992.
Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844v3, 2016. | 1611.01673#31 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 32 | The QRNN is also related to work in hybrid convolutional–recurrent models. Zhou et al. (2015) apply CNNs at the word level to generate n-gram features used by an LSTM for text classification. Xiao & Cho (2016) also tackle text classification by applying convolutions at the character level, with a stride to reduce sequence length, then feeding these features into a bidirectional LSTM. A similar approach was taken by Lee et al. (2016) for character-level machine translation. Their model's encoder uses a convolutional layer followed by max-pooling to reduce sequence length, a four-layer highway network, and a bidirectional GRU. The parallelism of the convolutional, pooling, and highway layers allows training speed comparable to subword-level models without hard-coded text segmentation.
The QRNN encoder–decoder model shares the favorable parallelism and path-length properties exhibited by the ByteNet (Kalchbrenner et al., 2016), an architecture for character-level machine translation based on residual convolutions over binary trees. Their model was constructed to achieve three desired properties: parallelism, linear-time computational complexity, and short paths between any pair of words in order to better propagate gradient signals.
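For context on the pooling component referred to throughout this record, here is a minimal illustrative sketch (not the authors' code) of element-wise gated pooling applied after convolutions; `z` and `f` are assumed to be precomputed candidate and forget-gate tensors of shape (timesteps, channels), and the recurrence h_t = f_t ⊙ h_{t−1} + (1 − f_t) ⊙ z_t is sequential over time but element-wise over channels:

```python
import numpy as np

def qrnn_f_pooling(z, f, h0=None):
    """Gated pooling: h_t = f_t * h_{t-1} + (1 - f_t) * z_t.

    z, f: arrays of shape (T, C) produced by a convolutional layer
    (f is assumed to already be passed through a sigmoid).
    """
    T, C = z.shape
    h = np.zeros((T, C))
    h_prev = np.zeros(C) if h0 is None else h0
    for t in range(T):                       # sequential over time ...
        h_prev = f[t] * h_prev + (1.0 - f[t]) * z[t]
        h[t] = h_prev                        # ... but element-wise over channels
    return h

# toy usage
rng = np.random.default_rng(0)
z = rng.normal(size=(5, 4))
f = 1.0 / (1.0 + np.exp(-rng.normal(size=(5, 4))))  # sigmoid gate
print(qrnn_f_pooling(z, f).shape)  # (5, 4)
```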
# 5 CONCLUSION | 1611.01576#32 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 32 | Model Parameters Test Perplexity Mikolov & Zweig (2012) - KN-5 Mikolov & Zweig (2012) - KN5 + cache Mikolov & Zweig (2012) - RNN Mikolov & Zweig (2012) - RNN-LDA Mikolov & Zweig (2012) - RNN-LDA + KN-5 + cache Pascanu et al. (2013) - Deep RNN Cheng et al. (2014) - Sum-Prod Net Zaremba et al. (2014) - LSTM (medium) Zaremba et al. (2014) - LSTM (large) Gal (2015) - Variational LSTM (medium, untied) Gal (2015) - Variational LSTM (medium, untied, MC) Gal (2015) - Variational LSTM (large, untied) Gal (2015) - Variational LSTM (large, untied, MC) Kim et al. (2015) - CharCNN Press & Wolf (2016) - Variational LSTM, shared embeddings Merity et al. (2016) - Zoneout + Variational LSTM (medium) Merity et al. (2016) - Pointer Sentinel-LSTM (medium) Inan et al. (2016) - | 1611.01578#32 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 32 | while BWN and the proposed LAB perform better (as discussed in Section 3.2). Figure 2 shows the distributions of the hidden-to-hidden weight gradients for T S = 10 and 100. As can be seen, while all models have similar gradient distributions at T S = 10, the gradient values in BinaryConnect are much higher than those of the other algorithms for the deeper network (T S = 100).
Table 4: Testing cross-entropy on War and Peace, for LSTMs with different time steps (TS). Difference between cross-entropies of binarized scheme and full-precision network is shown in brackets.
                 TS = 10         TS = 50         TS = 100        TS = 150
full-precision   1.527           1.310           1.268           1.249
BinaryConnect    1.528 (0.001)   2.980 (1.670)   2.942 (1.674)   2.872 (1.623)
BWN              1.532 (0.005)   1.325 (0.015)   1.313 (0.045)   1.311 (0.062)
LAB              1.527 (0.000)   1.324 (0.014)   1.291 (0.023)   1.285 (0.036)
(Figure 2 panel labels: (a) TS = 10. (b) TS = 100.) | 1611.01600#32 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 32 | We also visualize these two feature spaces using t-SNE in Figure 2. t-SNE is performed on a large fraction of dev data but we only plot data points corresponding to the months of the year. An interesting pattern emerges in the Word space, where May is separated from the rest of the months because May has multiple meanings in the English language. The contextual embedding layer uses contextual cues from surrounding words and is able to separate the usages of the word May. Finally we visualize the attention matrices for some question-context tuples in the dev data in Figure 3. In the ï¬rst example, Where matches locations and in the second example, many matches quantities and numerical symbols. Also, entities in the question typically attend to the same entities in the context, thus providing a feature for the model to localize possible answers.
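A minimal sketch of the kind of t-SNE projection described above (illustrative only; the function name, output path, and the toy month vectors are our own, not the authors' analysis code):

```python
# Requires scikit-learn and matplotlib.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(embeddings, labels, out_path="tsne_months.png"):
    """Project word vectors to 2-D with t-SNE and annotate each point with its token."""
    coords = TSNE(n_components=2, init="random", random_state=0,
                  perplexity=5).fit_transform(embeddings)
    plt.figure(figsize=(5, 5))
    plt.scatter(coords[:, 0], coords[:, 1], s=10)
    for (x, y), token in zip(coords, labels):
        plt.annotate(token, (x, y), fontsize=8)
    plt.savefig(out_path)

# toy usage: 12 random "month" vectors of dimension 100
rng = np.random.default_rng(0)
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
plot_tsne(rng.normal(size=(12, 100)), months)
```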
Discussions. We analyse the performance of our model against a traditional language-feature-based baseline (Rajpurkar et al., 2016). Figure 2b shows a Venn diagram of the dev set questions correctly answered by the models. Our model is able to answer more than 86% of the questions
| 1611.01603#32 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 32 | Since we know that at the fixed point the Bellman residual will be small for small α, we can consider updating the parameters to reduce the Bellman residual in a fashion similar to Q-learning, i.e.,
Δθ ∝ E_{s,a}[(T*Q̃^π(s, a) − Q̃^π(s, a)) ∇_θ log π(s, a)],   Δw ∝ E_{s,a}[(T*Q̃^π(s, a) − Q̃^π(s, a)) ∇_w V(s)].   (13)
This is Q-learning applied to a particular form of the Q-values, and can also be interpreted as an actor-critic algorithm with an optimizing (and therefore biased) critic.
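An illustrative sketch of equations (12) and (13) in a toy tabular setting (NumPy; the variable names and the one-step max target standing in for T* are our reading, not the authors' code):

```python
import numpy as np

alpha = 0.1   # entropy-regularization weight

def q_from_policy(pi, v):
    """Eq. (12): Q_tilde(s, a) = alpha * (log pi(s, a) + H(s)) + V(s)."""
    entropy = -(pi * np.log(pi)).sum(axis=-1, keepdims=True)
    return alpha * (np.log(pi) + entropy) + v[:, None]

def bellman_residual(q_tilde, a, r, gamma, q_tilde_next, done):
    """T* Q_tilde - Q_tilde for sampled transitions (s, a, r, s')."""
    target = r + gamma * (1.0 - done) * q_tilde_next.max(axis=-1)
    return target - q_tilde[np.arange(len(a)), a]

# toy usage with 3 transitions and 2 actions
rng = np.random.default_rng(0)
pi, v = rng.dirichlet(np.ones(2), size=3), rng.normal(size=3)
pi_next, v_next = rng.dirichlet(np.ones(2), size=3), rng.normal(size=3)
a = np.array([0, 1, 0]); r = np.array([1.0, 0.0, 0.5]); done = np.zeros(3)
delta = bellman_residual(q_from_policy(pi, v), a, r, 0.99,
                         q_from_policy(pi_next, v_next), done)
print(0.5 * (delta ** 2).mean())   # the loss whose gradient gives eq. (13)
```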
The full scheme simply combines two updates to the policy, the regularized policy gradient update (2) and the Q-learning update (13). Assuming we have an architecture that provides a policy π, a value function estimate V, and an action-value critic Q^π, then the parameter updates can be written as (suppressing the (s, a) notation)
Δθ ∝ (1 − η) E_{s,a}[(Q^π − Q̃^π) ∇_θ log π] + η E_{s,a}[(T*Q̃^π − Q̃^π) ∇_θ log π],   (14)
Δw ∝ (1 − η) E_{s,a}[(Q^π − Q̃^π) ∇_w V] + η E_{s,a}[(T*Q̃^π − Q̃^π) ∇_w V], | 1611.01626#32 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 32 | Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.
Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S Paek, and In So Kweon. Pixel-level domain transfer. arXiv preprint arXiv: 1603.07442, 2016.
Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2528-2535. IEEE, 2010.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv: 1609.03126, 2016.
| 1611.01673#32 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 33 | # 5 CONCLUSION
Intuitively, many aspects of the semantics of long sequences are context-invariant and can be computed in parallel (e.g., convolutionally), but some aspects require long-distance context and must be computed recurrently. Many existing neural network architectures either fail to take advantage of the contextual information or fail to take advantage of the parallelism. QRNNs exploit both parallelism and context, exhibiting advantages from both convolutional and recurrent neural networks. QRNNs have better predictive accuracy than LSTM-based models of equal hidden size, even though they use fewer parameters and run substantially faster. Our experiments show that the speed and accuracy advantages remain consistent across tasks and at both word and character levels.
Extensions to both CNNs and RNNs are often directly applicable to the QRNN, while the model's hidden states are more interpretable than those of other recurrent architectures as its channels maintain their independence across timesteps. We believe that QRNNs can serve as a building block for long-sequence tasks that were previously impractical with traditional RNNs.
# REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015. | 1611.01576#33 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 33 | Zoneout + Variational LSTM (medium) Merity et al. (2016) - Pointer Sentinel-LSTM (medium) Inan et al. (2016) - VD-LSTM + REAL (large) Zilly et al. (2016) - Variational RHN, shared embeddings 2Mâ¡ 2Mâ¡ 6Mâ¡ 7Mâ¡ 9Mâ¡ 6M 5Mâ¡ 20M 66M 20M 20M 66M 66M 19M 51M 20M 21M 51M 24M 141.2 125.7 124.7 113.7 92.0 107.5 100.0 82.7 78.4 79.7 78.6 75.2 73.4 78.9 73.2 80.6 70.9 68.5 66.0 Neural Architecture Search with base 8 Neural Architecture Search with base 8 and shared embeddings Neural Architecture Search with base 8 and shared embeddings 32M 25M 54M 67.9 64.0 62.4 | 1611.01578#33 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 33 | TS = 150 column of Table 4: full-precision 1.249, BinaryConnect 2.872 (1.623), BWN 1.311 (0.062), LAB 1.285 (0.036)
[Figure 2 plot residue: two histograms of weight-gradient magnitude (x-axis: gradient magnitude; y-axis: percentage of elements) comparing full-precision, BinaryConnect, BWN, and LAB.]
Figure 2: Distribution of weight gradients on War and Peace, for LSTMs with different time steps.
Note from Table 4 that as the time step increases, all except BinaryConnect show better performance. However, degradation due to binarization also becomes more severe. This is because the weights are shared across time steps. Hence, error due to binarization also propagates across time.
# 5 CONCLUSION | 1611.01600#33 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 33 | [Garbled rendering of the Figure 3 attention visualization: a Super Bowl 50 context paragraph with question words attending to context words such as Super, Bowl, and 50; the readable Figure 3 caption appears in a later chunk.] | 1611.01603#33 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 33 | here η ∈ [0, 1] is a weighting parameter that controls how much of each update we apply. In the case where η = 0 the above scheme reduces to entropy regularized policy gradient. If η = 1 then it becomes a variant of (batch) Q-learning with an architecture similar to the dueling architecture (Wang et al., 2016). Intermediate values of η produce a hybrid between the two. Examining the update we see that two error terms are trading off. The first term encourages consistency with the critic, and the second term encourages optimality over time. However, since we know that under standard policy gradient the Bellman residual will be small, it follows that adding a term that reduces that error should not make much difference at the fixed point. That is, the updates should be complementary, pointing in the same general direction, at least far away from a fixed point. This update can also be interpreted as an actor-critic update where the critic is given by a weighted combination of a standard critic and an optimizing critic. Yet another interpretation of the update is a combination of expected-SARSA and Q-learning, where the Q-values are parameterized as the sum of an advantage function and a value function.
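A schematic illustration of the η trade-off described above; the gradient arguments are placeholders standing in for the two expectation terms of equation (14), not a real training loop:

```python
def combined_update(grad_actor_critic, grad_q_learning, eta):
    """eta = 0 -> pure (regularized) policy gradient; eta = 1 -> pure Q-learning."""
    return [(1.0 - eta) * g_ac + eta * g_q
            for g_ac, g_q in zip(grad_actor_critic, grad_q_learning)]

# toy usage with two parameter "tensors" represented as floats
print(combined_update([1.0, -2.0], [0.5, 0.5], eta=0.0))  # [1.0, -2.0]
print(combined_update([1.0, -2.0], [0.5, 0.5], eta=1.0))  # [0.5, 0.5]
print(combined_update([1.0, -2.0], [0.5, 0.5], eta=0.5))  # [0.75, -0.75]
```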
# 4.2 PRACTICAL IMPLEMENTATION | 1611.01626#33 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 33 | A APPENDIX A.1 ACCELERATED CONVERGENCE & REDUCED VARIANCE See Figures 10, 11, 12, and 13. [Figure 10 plot residue: generator objective vs. iteration #, 0-12000.] Figure 10: Generator objective, F, averaged over 5 training runs on CelebA. Increasing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 11 provides alternative evidence of GMAN-0's accelerated convergence. [Figure 12 plot residue: generator objective vs. iteration #, 0-30000; legend includes N=1 Original, N=1 Modified, N=2 λ=0, N=2 λ=1.] Figure 12: Generator objective, F, averaged over 5 training runs on CIFAR-10. Increasing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 13 provides alternative evidence of GMAN-0's accelerated convergence. A.2 ADDITIONAL GMAM TABLES | 1611.01673#33 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 34 | Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
David Balduzzi and Muhammad Ghifary. Strongly-typed recurrent neural networks. In ICML, 2016.
James Bradbury and Richard Socher. MetaMind neural machine translation system for WMT 2016. In Proceedings of the First Conference on Machine Translation, Berlin, Germany. Association for Computational Linguistics, 2016.
Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In NIPS, 2016.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural Computation, 9(8): 1735â1780, Nov 1997. ISSN 0899-7667.
Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.
Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058, 2014. | 1611.01576#34 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 34 | Table 2: Single model perplexity on the test set of the Penn Treebank language modeling task. Parameter numbers with ‡ are estimates with reference to Merity et al. (2016).
The newly discovered cell is visualized in Figure 8 in Appendix A. The visualization reveals that the new cell has many similarities to the LSTM cell in the first few steps, such as it likes to compute W1 ∗ h_{t−1} + W2 ∗ x_t several times and send them to different components in the cell.
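For intuition, a tiny hedged sketch of a combination step of the form W1 ∗ h_{t−1} + W2 ∗ x_t (hypothetical shapes and activation choice; this is not the discovered cell itself):

```python
import numpy as np

def combine(h_prev, x_t, W1, W2, activation=np.tanh):
    """One candidate node: activation(W1 * h_{t-1} + W2 * x_t)."""
    return activation(h_prev @ W1 + x_t @ W2)

rng = np.random.default_rng(0)
h_prev, x_t = rng.normal(size=8), rng.normal(size=8)
print(combine(h_prev, x_t, rng.normal(size=(8, 8)), rng.normal(size=(8, 8))).shape)
```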
Transfer Learning Results: To understand whether the cell can generalize to a different task, we apply it to the character language modeling task on the same dataset. We use an experimental setup that is similar to Ha et al. (2016), but use variational dropout by Gal (2015). We also train our own LSTM with our setup to get a fair LSTM baseline. Models are trained for 80K steps and the best test set perplexity is taken according to the step where validation set perplexity is the best. The results on the test set of our method and state-of-art methods are reported in Table 3. The results on small settings with 5-6M parameters conï¬rm that the new cell does indeed generalize, and is better than the LSTM cell. | 1611.01578#34 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 34 | # 5 CONCLUSION
In this paper, we propose a binarization algorithm that directly considers its effect on the loss during binarization. The binarized weights are obtained using a proximal Newton algorithm with diagonal Hessian approximation. The proximal step has an efficient closed-form solution, and the second-order information in the Hessian can be readily obtained from the Adam optimizer. Experiments show that the proposed algorithm outperforms existing binarization schemes, has performance comparable to the original full-precision network, and is also robust for wide and deep networks.
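A rough sketch of the kind of loss-aware scaling described above, under our assumption that the closed-form proximal step scales sign(w) by a curvature-weighted average of |w|, with the diagonal Hessian estimate d standing in for Adam's second moments (consult the paper for the exact update):

```python
import numpy as np

def loss_aware_binarize(w, d, eps=1e-8):
    """Hedged sketch: scale sign(w) by a curvature-weighted average of |w|.

    w : real-valued weights of one layer
    d : positive diagonal Hessian estimate (e.g. Adam's second-moment term)
    Assumed closed form: alpha = ||d * w||_1 / ||d||_1, b = sign(w).
    """
    alpha = np.abs(d * w).sum() / (np.abs(d).sum() + eps)
    b = np.where(w >= 0, 1.0, -1.0)
    return alpha * b, alpha

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
d = rng.uniform(0.1, 1.0, size=1000)       # stand-in for second moments
w_b, alpha = loss_aware_binarize(w, d)
print(alpha, np.unique(w_b))
```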
ACKNOWLEDGMENTS
This research was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region (Grant 614513). We thank Yongqi Zhang for helping with the experiments, and developers of Theano (Theano Development Team, 2016), Pylearn2 (Goodfellow et al., 2013) and Lasagne. We also thank NVIDIA for the support of Titan X GPU.
# REFERENCES
M. Courbariaux, Y. Bengio, and J.P. David. BinaryConnect: Training deep neural networks with binary weights during propagations. In NIPS, pp. 3105â3113, 2015. | 1611.01600#34 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 34 | [Garbled rendering of the Figure 3 attention visualization, continued: quantity words (hundreds, few, among, 15, several, only) and a Warsaw parks/lakes context paragraph with attended words such as Warsaw, natural, reserves, and are; the readable Figure 3 caption appears in a later chunk.] | 1611.01603#34 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 34 | # 4.2 PRACTICAL IMPLEMENTATION
The updates presented in (14) are batch updates, with an exact critic Q^π. In practice we want to run this scheme online, with an estimate of the critic, where we don't necessarily apply the policy gradient update at the same time or from the same data source as the Q-learning update.
Our proposed scheme is as follows. One or more agents interact with an environment, encountering states and rewards and performing on-policy updates of (shared) parameters using an actor-critic algorithm where both the policy and the critic are being updated online. Each time an agent receives new data from the environment it writes it to a shared replay memory buffer. Periodically a separate learner process samples from the replay buffer and performs a step of Q-learning on the parameters of the policy using (13) (a sketch of this control flow follows this record). This scheme has several advantages. The critic can accumulate the Monte
(a) Grid world. (b) Performance versus agent steps in grid world. | 1611.01626#34 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
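The practical scheme described in the chunk above (on-policy actors feeding a shared replay buffer, plus a periodic off-policy Q-learning learner) can be sketched as follows; the agent and env interfaces are hypothetical and this is not the authors' implementation:

```python
import random
from collections import deque

replay = deque(maxlen=100_000)      # shared replay memory buffer

def actor_loop(env, agent, num_steps):
    """On-policy actor: act, update the actor-critic online, and log transitions."""
    s = env.reset()
    for _ in range(num_steps):
        a = agent.act(s)
        s_next, r, done = env.step(a)
        agent.actor_critic_update(s, a, r, s_next, done)   # on-policy update
        replay.append((s, a, r, s_next, done))             # write to shared buffer
        s = env.reset() if done else s_next

def learner_step(agent, batch_size=32):
    """Periodic off-policy learner: Q-learning on the policy's implied Q-values."""
    if len(replay) >= batch_size:
        batch = random.sample(replay, batch_size)
        agent.q_learning_update(batch)                      # one step of eq. (13)
```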
1611.01673 | 34 | evidence of GMAN-0's accelerated convergence. A.2 ADDITIONAL GMAM TABLES [Figure 11 plot residue: cumulative stdev of the generator objective vs. iteration #, 0-12000.] Figure 11: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN-0 with N = 5 achieves steady-state at 2x speed of GAN (N = 1). Note Figure 10's filled shadows reveal stdev of F over runs, while this plot shows stdev over time. [Figure 13 plot residue: stdev of the generator objective vs. iteration #, 0-30000; legend includes N=1 Original, N=1 Modified.] Figure 13: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady-state. GMAN-0 with N = 5 achieves steady-state at 2x speed of GAN (N = 1). Note Figure 12's filled shadows reveal stdev of F over runs, while this plot shows stdev over | 1611.01673#34 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 35 | Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058, 2014.
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classiï¬cation with deep convo- lutional neural networks. In NIPS, 2012. | 1611.01576#35 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 35 | Additionally, we carry out a larger experiment where the model has 16.28M parameters. This model has a weight decay rate of 1e-4, was trained for 600K steps (longer than the above models) and the test perplexity is taken where the validation set perplexity is highest. We use dropout rates of 0.2 and 0.5 as described in Gal (2015), but do not use embedding dropout. We use the ADAM optimizer with a learning rate of 0.001 and an input embedding size of 128. Our model had two layers with 800 hidden units. We used a minibatch size of 32 and BPTT length of 100. With this setting, our model achieves 1.214 perplexity, which is the new state-of-the-art result on this task.
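The hyperparameters listed above, collected into a single configuration sketch (the dictionary keys are our own naming, not from the paper or any specific framework):

```python
# Hyperparameters restated from the paragraph above (key names are illustrative).
ptb_large_cell_config = {
    "parameters": "16.28M",
    "weight_decay": 1e-4,
    "train_steps": 600_000,
    "dropout_rates": (0.2, 0.5),      # variational dropout as in Gal (2015)
    "embedding_dropout": False,
    "optimizer": "adam",
    "learning_rate": 1e-3,
    "input_embedding_size": 128,
    "num_layers": 2,
    "hidden_units": 800,
    "batch_size": 32,
    "bptt_length": 100,
}
print(ptb_large_cell_config["hidden_units"])
```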
Finally, we also drop our cell into the GNMT framework (Wu et al., 2016), which was previously tuned for LSTM cells, and train a WMT14 English→German translation model. The GNMT
| 1611.01578#35 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 35 | Y. Dauphin, H. de Vries, and Y. Bengio. Equilibrated adaptive learning rates for non-convex opti- mization. In NIPS, pp. 1504â1512, 2015a.
Y. Dauphin, H. de Vries, J. Chung, and Y. Bengio. RMSprop and equilibrated adaptive learning rates for non-convex optimization. Technical Report arXiv:1502.04390, 2015b.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121â2159, 2011.
X. Glorot and Y. Bengio. Understanding the difï¬culty of training deep feedforward neural networks. In AISTAT, pp. 249â256, 2010.
Y. Gong, L. Liu, M. Yang, and L. Bourdev. Compressing deep convolutional networks using vector quantization. Technical Report arXiv:1412.6115, 2014. | 1611.01600#35 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 35 | Figure 3: Attention matrices for question-context tuples. The left palette shows the context paragraph (correct answer in red and underlined), the middle palette shows the attention matrix (each row is a question word, each column is a context word), and the right palette shows the top attention points for each question word, above a threshold.
correctly answered by the baseline. The 14% that are incorrectly answered do not have a clear pattern. This suggests that neural architectures are able to exploit much of the information captured by the language features. We also break this comparison down by the first words in the questions (Figure 2c). Our model outperforms the traditional baseline comfortably in every category.
Error Analysis. We randomly select 50 incorrect questions (based on EM) and categorize them into 6 classes. 50% of errors are due to the imprecise boundaries of the answers, 28% involve syntactic complications and ambiguities, 14% are paraphrase problems, 4% require external knowledge, 2% need multiple sentences to answer, and 2% are due to mistakes during tokenization. See Appendix A for examples of the error modes.
# 5 CLOZE TEST EXPERIMENTS
We also evaluate our model on the task of cloze-style reading comprehension using the CNN and Daily Mail datasets (Hermann et al., 2015). | 1611.01603#35 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 35 | Figure 1: Grid world experiment.
Carlo return over many time periods, allowing us to spread the influence of a reward received in the future backwards in time. Furthermore, the replay buffer can be used to store and replay "important" past experiences by prioritizing those samples (Schaul et al., 2015). The use of the replay buffer can help to reduce problems associated with correlated training data, as generated by an agent exploring an environment where the states are likely to be similar from one time step to the next. The use of replay can also act as a kind of regularizer, preventing the policy from moving too far from satisfying the Bellman equation, thereby improving stability, in a similar sense to that of a policy "trust-region" (Schulman et al., 2015). Moreover, by batching up replay samples to update the network we can leverage GPUs to perform the updates quickly; this is in comparison to pure policy gradient techniques, which are generally implemented on CPU (Mnih et al., 2016).
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
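Since the chunk above leans on an experience replay buffer, here is a minimal uniform-sampling sketch of one. The capacity and the absence of prioritisation are illustrative choices, not details taken from the paper.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay with uniform sampling."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted first

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```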
1611.01576 | 36 | David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations. arXiv preprint arXiv:1606.01305, 2016.
Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In ICML, 2016.
Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016.
Shayne Longpre, Sabeek Pradhan, Caiming Xiong, and Richard Socher. A way out of the odyssey: Analyzing and combining recent insights for LSTMs. Submitted to ICLR, 2016.
M. T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, 2015. | 1611.01576#36 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 36 |
RNN Cell Type | Parameters | Test bits per character
Ha et al. (2016) - Layer Norm HyperLSTM | 4.92M | 1.250
Ha et al. (2016) - Layer Norm HyperLSTM Large Embeddings | 5.06M | 1.233
Ha et al. (2016) - 2-Layer Norm HyperLSTM | 14.41M | 1.219
Two layer LSTM | 6.57M | 1.243
Two Layer with New Cell | 6.57M | 1.228
Two Layer with New Cell | 16.28M | 1.214
Table 3: Comparison between our cell and state-of-the-art methods on PTB character modeling. The new cell was found on word-level language modeling. | 1611.01578#36 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 36 | I.J. Goodfellow, D. Warde-Farley, P. Lamblin, V. Dumoulin, M. Mirza, R. Pascanu, J. Bergstra, F. Bastien, and Y. Bengio. Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214, 2013.
S. Han, H. Mao, and W.J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and Huffman coding. In ICLR, 2016.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, pp. 1735â1780, 1997.
I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks. In NIPS, pp. 4107â4115, 2016.
A. Karpathy, J. Johnson, and F.-F. Li. Visualizing and understanding recurrent networks. In ICLR, 2016.
Y.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In ICLR, 2016. | 1611.01600#36 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 36 | We also evaluate our model on the task of cloze-style reading comprehension using the CNN and Daily Mail datasets (Hermann et al., 2015).
Dataset. In a cloze test, the reader is asked to fill in words that have been removed from a passage, for measuring one's ability to comprehend text. Hermann et al. (2015) have recently compiled a massive Cloze-style comprehension dataset, consisting of 300k/4k/3k and 879k/65k/53k (train/dev/test) examples from CNN and DailyMail news articles, respectively. Each example has a news article and an incomplete sentence extracted from the human-written summary of the article. To distinguish this task from language modeling and force one to refer to the article to predict the correct missing word, the missing word is always a named entity, anonymized with a random ID. Also, the IDs must be shuffled constantly during test, which is also critical for full anonymization. | 1611.01603#36 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
Since we perform Q-learning using samples from a replay buffer that were generated by an old policy, we are performing (slightly) off-policy learning. However, Q-learning is known to converge to the optimal Q-values in the off-policy tabular case (under certain conditions) (Sutton & Barto, 1998), and has shown good performance off-policy in the function approximation case (Mnih et al., 2013).
4.3 MODIFIED FIXED POINT
The PGQL updates in equation (14) have modified the fixed point of the algorithm, so the analysis of §3 is no longer valid. Considering the tabular case once again, it is still the case that the policy $\pi \propto \exp(\tilde{Q}^\pi / \alpha)$ as before, where $\tilde{Q}^\pi$ is defined by (12); however, where previously the fixed point satisfied $\tilde{Q}^\pi = Q^\pi$, with $Q^\pi$ corresponding to the Q-values induced by $\pi$, now we have | 1611.01626#36 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 36 | Table 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column.
Score | Variant | GMAN-0 | GMAN-1 | GMAN* | mod-GAN
0.172 | GMAN-0 | - | -0.022 | -0.062 | -0.088
0.050 | GMAN-1 | 0.022 | - | 0.006 | -0.078
-0.055 | GMAN* | 0.062 | -0.006 | - | -0.001
-0.167 | mod-GAN | 0.088 | 0.078 | 0.001 | -
Table 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with two discriminators.
GMAN-0 | GMAN-1 | mod-GAN | GMAN*
Score | 5.878 ± 0.193 | 5.765 ± 0.108 | 5.738 ± 0.176 | 5.539 ± | 1611.01673#36 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 37 | M. T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, 2015.
Andrew L Maas, Andrew Y Ng, and Christopher Potts. Multi-dimensional sentiment analysis with learned representations. Technical report, 2011.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.
Grégoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, and Yoshua Bengio. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprint arXiv:1412.5335, 2014.
Tomas Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.
Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Virtual adversarial training for semi-supervised text classiï¬cation. arXiv preprint arXiv:1605.07725, 2016. | 1611.01576#37 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 37 | Table 3: Comparison between our cell and state-of-the-art methods on PTB character modeling. The new cell was found on word-level language modeling.
network has 8 layers in the encoder, 8 layers in the decoder. The first layer of the encoder has bidirectional connections. The attention module is a neural network with 1 hidden layer. When an LSTM cell is used, the number of hidden units in each layer is 1024. The model is trained in a distributed setting with a parameter server and 12 workers. Additionally, each worker uses 8 GPUs and a minibatch of 128. We use Adam with a learning rate of 0.0002 in the first 60K training steps, and SGD with a learning rate of 0.5 until 400K steps. After that the learning rate is annealed by dividing by 2 after every 100K steps until it reaches 0.1. Training is stopped at 800K steps. More details can be found in Wu et al. (2016). | 1611.01578#37 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
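A small helper sketching the learning-rate schedule quoted above (Adam at 2e-4 for the first 60K steps, then SGD at 0.5 until 400K, halved every 100K steps with a floor of 0.1, training stopped at 800K). The exact step at which each halving lands is my reading of the prose, not a detail confirmed by the paper.

```python
def gnmt_learning_rate(step):
    """Return (optimizer_name, learning_rate) for a global training step,
    following the schedule described in the excerpt above."""
    if step < 60_000:
        return "adam", 2e-4
    if step < 400_000:
        return "sgd", 0.5
    # After 400K steps the SGD rate is halved every 100K steps, floored at 0.1.
    halvings = (step - 400_000) // 100_000 + 1
    return "sgd", max(0.5 / (2 ** halvings), 0.1)

# Example under this reading: gnmt_learning_rate(450_000) == ("sgd", 0.25)
```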
1611.01600 | 37 | D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436â444, 2015.
J.D. Lee, Y. Sun, and M.A. Saunders. Proximal Newton-type methods for minimizing composite functions. SIAM Journal on Optimization, 24(3):1420â1443, 2014.
F. Li and B. Liu. Ternary weight networks. Technical Report arXiv:1605.04711, 2016.
Z. Lin, M. Courbariaux, R. Memisevic, and Y. Bengio. Neural networks with few multiplications. In ICLR, 2016.
J. Martens and I. Sutskever. Training deep and recurrent networks with Hessian-free optimization. In Neural Networks: Tricks of the trade, pp. 479â535. Springer, 2012.
A. Novikov, D. Podoprikhin, A. Osokin, and D.P. Vetrov. Tensorizing neural networks. In NIPS, pp. 442â450, 2015.
R. Pascanu and Y. Bengio. Revisiting natural gradient for deep networks. In ICLR, 2014. | 1611.01600#37 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 37 | Model Details. The model architecture used for this task is very similar to that for SQuAD (Section 4) with only a few small changes to adapt it to the cloze test. Since each answer in the CNN/DailyMail datasets is always a single word (entity), we only need to predict the start index (p1); the prediction for the end index (p2) is omitted from the loss function. Also, we mask out all non-entity words in the final classification layer so that they are forced to be excluded from possible answers. Another important difference from SQuAD is that the answer entity might appear more than once in the context paragraph. To address this, we follow a similar strategy from Kadlec et al. (2016). During training, after we obtain p1, we sum all probability values of the entity instances
Published as a conference paper at ICLR 2017 | 1611.01603#37 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
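A sketch of the two adaptations described in the chunk above: masking the start-index distribution so that only entity positions can be predicted, and summing the probability mass on every occurrence of the gold entity before taking the loss. Tensor shapes, names, and the epsilon are illustrative.

```python
import torch

def cloze_loss(start_logits, entity_mask, answer_positions):
    """start_logits: (context_len,) scores over context positions.
    entity_mask: (context_len,) bool, True where the token is a candidate entity.
    answer_positions: indices of every occurrence of the correct entity."""
    # Force non-entity positions out of the softmax, as described above.
    masked_logits = start_logits.masked_fill(~entity_mask, float("-inf"))
    p1 = torch.softmax(masked_logits, dim=-1)
    # Sum the probability over all occurrences of the answer entity, then
    # apply a negative log-likelihood loss to the summed probability.
    answer_prob = p1[answer_positions].sum()
    return -torch.log(answer_prob + 1e-12)

# Toy usage with invented values:
logits = torch.randn(10)
mask = torch.tensor([1, 0, 1, 0, 1, 0, 1, 0, 1, 0], dtype=torch.bool)
loss = cloze_loss(logits, mask, answer_positions=torch.tensor([2, 8]))
```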
1611.01626 | 37 | $\tilde{Q}^\pi = (1-\eta)\, Q^\pi + \eta\, \mathcal{T}^* \tilde{Q}^\pi, \qquad (15)$
Or equivalently, if $\eta < 1$, we have $\tilde{Q}^\pi = (1-\eta) \sum_{k=0}^{\infty} \eta^k (\mathcal{T}^*)^k Q^\pi$. In the appendix we show that $\|\tilde{Q}^\pi - Q^\pi\| \to 0$ and that $\|\mathcal{T}^* Q^\pi - Q^\pi\| \to 0$ with decreasing $\alpha$ in the tabular case. That is, for small $\alpha$ the induced Q-values and the Q-values estimated from the policy are close, and we still have the guarantee that in the limit the Q-values are optimal. In other words, we have not perturbed the policy very much by the addition of the auxiliary update.
# 5 NUMERICAL EXPERIMENTS
5.1 GRID WORLD
In this section we discuss the results of running PGQL on a toy 4 by 6 grid world, as shown in Figure 1a. The agent always begins in the square marked "S" and the episode continues until it reaches the square marked "T", upon which it receives a reward of 1. All other times it receives no reward. For this experiment we chose regularization parameter α = 0.001 and discount factor γ = 0.95.
Figure 1b shows the performance traces of three different agents learning in the grid world, running from the same initial random seed. The lines show the true expected performance of the policy
8 | 1611.01626#37 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
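To make the modified fixed point of equation (15) above concrete, a tiny numerical sketch: on a one-state MDP we iterate the map Q̃ ← (1-η)Q^π + η T*Q̃ to convergence. The rewards, discount, η, and Q^π values are invented purely for illustration.

```python
import numpy as np

# Toy single-state MDP with two actions (invented numbers).
rewards = np.array([1.0, 0.5])   # r(s, a) for actions a0, a1
gamma, eta = 0.9, 0.3            # discount and Q-learning weight (illustrative)
Q_pi = np.array([5.0, 4.0])      # Q-values induced by the current policy (given)

def bellman_optimality(q):
    # T*Q(s, a) = r(s, a) + gamma * max_a' Q(s, a') for this one-state MDP
    return rewards + gamma * q.max()

# Solve Q_tilde = (1 - eta) * Q_pi + eta * T* Q_tilde by fixed-point iteration;
# the map is a contraction here since eta * gamma < 1.
Q_tilde = Q_pi.copy()
for _ in range(1000):
    Q_tilde = (1 - eta) * Q_pi + eta * bellman_optimality(Q_tilde)

print(Q_tilde)  # the modified fixed point of equation (15)
```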
1611.01673 | 37 | _GMAN* Score | 5.878 = 0.193 | 5.765 £ 0.108 | 5.738 £ 0.176 | 5.539 + 0.099 Table 4: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with two discriminators. | Score | Variant | GMAN-0 | GMAN* | GMAN-I | mod-GAN | t 0.180 | GMAN-0 - â0.008 â0.041 â0.132 5) 0.122 GMAN* 0.008 - â0.038 â0.092 3} 0.010 | GMAN-1 0.041 0.038 - â0.089 | â0.313 | mod-GAN 0.132 0.092 0.089 - Table 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with five discriminators. | GMAN-1 | GMAN-0 | GMAN* | _mod-GAN Score [6.001 £0.194 | 5.957 £0.135 | | 1611.01673#37 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 38 | Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
MarcâAurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level train- ing with recurrent neural networks. In ICLR, 2016.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
Seiya Tokui, Kenta Oono, and Shohei Hido. Chainer: A next-generation open source framework for deep learning.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classiï¬cation. In ACL, 2012.
Xin Wang, Yuanchao Liu, Chengjie Sun, Baoxun Wang, and Xiaolong Wang. Predicting polarities of tweets by composing word embeddings with long short-term memory. In ACL, 2015. | 1611.01576#38 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 38 | In our experiment with the new cell, we make no change to the above settings except for dropping in the new cell and adjusting the hyperparameters so that the new model should have the same computational complexity as the base model. The result shows that our cell, with the same computational complexity, achieves an improvement of 0.5 test set BLEU over the default LSTM cell. Though this improvement is not huge, the fact that the new cell can be used without any tuning on the existing GNMT framework is encouraging. We expect further tuning can help our cell perform better.
Control Experiment 1 – Adding more functions in the search space: To test the robustness of Neural Architecture Search, we add max to the list of combination functions and sin to the list of activation functions and rerun our experiments. The results show that even with a bigger search space, the model can achieve somewhat comparable performance. The best architecture with max and sin is shown in Figure 8 in Appendix A. | 1611.01578#38 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 38 | R. Pascanu and Y. Bengio. Revisiting natural gradient for deep networks. In ICLR, 2014.
R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICLR, pp. 1310-1318, 2013.
A. Rakotomamonjy, R. Flamary, and G. Gasso. DC proximal Newton for nonconvex optimization problems. IEEE Transactions on Neural Networks and Learning Systems, 27(3):636-647, 2016.
M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classiï¬cation using binary convolutional neural networks. In ECCV, 2016.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/ 1605.02688.
T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude, 2012. | 1611.01600#38 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 38 |
in the context that correspond to the correct answer. Then the loss function is computed from the summed probability. We use a minibatch size of 48 and train for 8 epochs, with early stop when the accuracy on validation data starts to drop. Inspired by the window-based method (Hill et al., 2016), we split each article into short sentences where each sentence is a 19-word window around each entity (hence the same word might appear in multiple sentences). The RNNs in BIDAF are not feed-forwarded or back-propagated across sentences, which speeds up the training process by parallelization. The entire training process takes roughly 60 hours on eight Titan X GPUs. The other hyper-parameters are identical to the model described in Section 4.
Results. The results of our single-run models and competing approaches on the CNN/DailyMail datasets are summarized in Table 3. † indicates ensemble methods. BIDAF outperforms previous single-run models on both datasets for both val and test data. On the DailyMail test, our single-run model even outperforms the best ensemble method. | 1611.01603#38 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
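A sketch of the window-based preprocessing mentioned above: for every entity occurrence, keep a 19-word window centred on it. Tokenisation and the behaviour at article boundaries are illustrative choices, not details from the paper.

```python
def entity_windows(tokens, is_entity, window=19):
    """Return one window (list of tokens) per entity occurrence.

    tokens: list of word tokens for one article.
    is_entity: list of bools, True where the token is an anonymised entity.
    A token can therefore appear in several windows, as noted above."""
    half = window // 2
    windows = []
    for i, entity in enumerate(is_entity):
        if entity:
            lo = max(0, i - half)
            hi = min(len(tokens), i + half + 1)
            windows.append(tokens[lo:hi])
    return windows

# Example: the entity '@entity3' gets a window of up to 19 surrounding words.
toks = "the match was won by @entity3 after a late goal".split()
ents = [t.startswith("@entity") for t in toks]
print(entity_windows(toks, ents))
```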
1611.01626 | 38 | Figure 1b shows the performance traces of three different agents learning in the grid world, running from the same initial random seed. The lines show the true expected performance of the policy
Figure 2: PGQL network augmentation.
from the start state, as calculated by value iteration after each update. The blue line is standard TD-actor-critic (Konda & Tsitsiklis, 2003), where we maintain an estimate of the value function and use that to generate an estimate of the Q-values for use as the critic. The green line is Q-learning where at each step an update is performed using data drawn from a replay buffer of prior experience and where the Q-values are parameterized as in equation (12). The policy is a softmax over the Q-value estimates with temperature α. The red line is PGQL, which at each step first performs the TD-actor-critic update, then performs the Q-learning update as in (14). | 1611.01626#38 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
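A schematic sketch of the Q-learning half of the PGQL step described above: Q-value estimates are recovered from the policy logits and a one-step Q-learning error is computed on a replay batch. The form Q ≈ α(log π + H(π)) is my reading of the parameterisation the excerpt calls equation (12), and the loss shape and names are placeholders.

```python
import torch
import torch.nn.functional as F

def q_from_policy(logits, alpha):
    """Recover Q-value estimates from policy logits, assuming the
    parameterisation Q ~= alpha * (log pi + H(pi)) referred to above."""
    log_pi = F.log_softmax(logits, dim=-1)
    entropy = -(log_pi.exp() * log_pi).sum(dim=-1, keepdim=True)
    return alpha * (log_pi + entropy)

def q_learning_loss(logits, actions, rewards, next_logits, dones, alpha, gamma):
    """One-step Q-learning error on a replay batch, applied to the
    policy-derived Q-values (the 'Q-learning update' of PGQL)."""
    q = q_from_policy(logits, alpha).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = q_from_policy(next_logits, alpha).max(dim=1).values
        target = rewards + gamma * (1 - dones) * next_q
    return F.mse_loss(q, target)

# In PGQL this loss is added, with a small weight, to the usual
# actor-critic loss computed from on-policy rollouts.
```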
1611.01673 | 38 | | GMAN-1 | GMAN-0 | GMAN* | _mod-GAN Score [6.001 £0.194 | 5.957 £0.135 | 5.955 £0.153 | 5.738 £0.176 Table 6: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with five discriminators. ERE Bae eae HeEBCA Bae PRREe Bee Ea Ga Ge ba ee Ee T= |S] esl | 1 Discriminator 5 discriminator GMAN* 5 discriminator GMAN -0 Figure 14: Sample of pictures generated on CelebA cropped dataset. 12 Score | Variant _| GMAN* | GMAN-I | GAN_ | GMAN-0 | GMAN-max | mod-GAN 0.184 GMAN* - â0.007 | â0.040 | â0.020 â0.028 â0.089 0.067 GMAN-1 0.007 - â0.008 | â0.008 â0.021 â0.037 tT 0.030 GAN 0.040 0.008 - 0.002 â0.018 â0.058 2} 0.005 GMAN-O 0.020 0.008 0.002 - | 1611.01673#38 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 39 | Sam Wiseman and Alexander M Rush. Sequence-to-sequence learning as beam-search optimization. arXiv preprint arXiv:1606.02960, 2016.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367, 2016.
Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text clas- siï¬cation. In NIPS, 2015. | 1611.01576#39 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
1611.01578 | 39 | Control Experiment 2 – Comparison against Random Search: Instead of policy gradient, one can use random search to find the best network. Although this baseline seems simple, it is often very hard to surpass (Bergstra & Bengio, 2012). We report the perplexity improvements using policy gradient against random search as training progresses in Figure 6. The results show that not only is the best model found with policy gradient better than the best model found with random search, but the average of the top models is also much better.
Figure 6: Improvement of Neural Architecture Search over random search over time. We plot the difference between the average of the top k models our controller finds vs. random search every 400 models run.
# 5 CONCLUSION | 1611.01578#39 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
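A sketch of the bookkeeping behind the Figure 6 comparison described above: after every block of 400 sampled models, compare the average of the top-k validation perplexities found so far by the controller and by random search. The perplexity lists are stubbed with fake values; real numbers would come from training each sampled architecture.

```python
import random

def top_k_average(perplexities, k):
    return sum(sorted(perplexities)[:k]) / min(k, len(perplexities))

def compare_to_random_search(controller_ppls, random_ppls, k=5, block=400):
    """Yield (models_seen, improvement), where improvement is the gap between
    the top-k average of random search and of the controller so far
    (positive means the controller is ahead, since lower perplexity is better)."""
    limit = min(len(controller_ppls), len(random_ppls))
    for n in range(block, limit + 1, block):
        gain = top_k_average(random_ppls[:n], k) - top_k_average(controller_ppls[:n], k)
        yield n, gain

# Stubbed example with invented perplexities:
ctrl = [random.uniform(62, 90) for _ in range(2000)]
rand = [random.uniform(65, 95) for _ in range(2000)]
for n, gain in compare_to_random_search(ctrl, rand):
    print(n, round(gain, 2))
```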
1611.01600 | 39 | T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude, 2012.
A.L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). NIPS, 2:1033-1040, 2002.
M.D. Zeiler. ADADELTA: An adaptive learning rate method. Technical Report arXiv:1212.5701, 2012.
S. Zhou, Z. Ni, X. Zhou, H. Wen, Y. Wu, and Y. Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. Technical Report arXiv:1606.06160, 2016.
# A PROOF OF PROPOSITION 3.1
Denote $\|\mathbf{x}\|_Q^2 = \mathbf{x}^\top Q\, \mathbf{x}$. Then
$\nabla \ell(\hat{\mathbf{w}}^{t-1})^\top (\hat{\mathbf{w}}^t - \hat{\mathbf{w}}^{t-1}) + \frac{1}{2} (\hat{\mathbf{w}}^t - \hat{\mathbf{w}}^{t-1})^\top D^{t-1} (\hat{\mathbf{w}}^t - \hat{\mathbf{w}}^{t-1})$
$= \frac{1}{2} \sum_{l=1}^{L} \big\| \hat{\mathbf{w}}_l^t - \big(\hat{\mathbf{w}}_l^{t-1} - \nabla_{\mathbf{w}_l} \ell(\hat{\mathbf{w}}^{t-1}) \oslash \mathbf{d}_l^{t-1}\big) \big\|_{D_l^{t-1}}^2 + c$
$= \frac{1}{2} \sum_{l=1}^{L} \|\hat{\mathbf{w}}_l^t - \mathbf{w}_l^t\|_{D_l^{t-1}}^2 + c$
$= \frac{1}{2} \sum_{l=1}^{L} \sum_{i=1}^{n_l} d_{l,i}^{t-1} \big(\alpha_l b_{l,i} - w_{l,i}^t\big)^2 + c,$ | 1611.01600#39 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
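The final quadratic in the proof above is minimised over a positive scaling and a sign vector. A short sketch of that minimisation, derived directly from the quadratic (b = sign(w), alpha equal to a d-weighted mean of |w|) rather than copied from the paper; the variable names are illustrative.

```python
import numpy as np

def loss_aware_binarize(w, d):
    """Minimise sum_i d_i * (alpha * b_i - w_i)^2 over alpha > 0, b in {-1, +1}^n.

    w: real-valued target weights (the proximal-Newton step w_l^t above).
    d: positive diagonal Hessian estimates (e.g. Adam second moments).
    Returns the scaling alpha and the binary vector b."""
    b = np.where(w >= 0, 1.0, -1.0)          # optimal signs follow w
    alpha = np.dot(d, np.abs(w)) / d.sum()   # d-weighted mean of |w|
    return alpha, b

# Toy usage with invented numbers:
w = np.array([0.3, -1.2, 0.05, 0.8])
d = np.array([1.0, 0.2, 2.0, 0.5])
alpha, b = loss_aware_binarize(w, d)
print(alpha, b)   # alpha * b approximates w under the weighted quadratic
```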
1611.01603 | 39 | Model (CNN val / CNN test):
Attentive Reader (Hermann et al., 2015): 61.6 / 63.0
MemNN (Hill et al., 2016): 63.4 / 66.8
AS Reader (Kadlec et al., 2016): 68.6 / 69.5
DER Network (Kobayashi et al., 2016): 71.3 / 72.9
Iterative Attention (Sordoni et al., 2016): 72.6 / 73.3
EpiReader (Trischler et al., 2016): 73.4 / 74.0
Stanford AR (Chen et al., 2016): 73.8 / 73.6
GAReader (Dhingra et al., 2016): 73.0 / 73.8
AoA Reader (Cui et al., 2016): 73.1 / 74.4
ReasoNet (Shen et al., 2016): 72.9 / 74.7
BIDAF (Ours): 76.3 / 76.9
MemNN† (Hill et al., 2016): 66.2 / 69.4
ASReader† (Kadlec et al., 2016): 73.9 / 75.4
Iterative Attention† (Sordoni et al., 2016): 74.5 / 75.7
GA Reader† (Dhingra et al., 2016): 76.4 / 77.4
Stanford AR† (Chen et al., 2016): 77.2 / 77.6
DailyMail test val 69.0 | 1611.01603#39 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 39 | The grid world was totally deterministic, so the step size could be large and was chosen to be 1. A step-size any larger than this made the pure actor-critic agent fail to learn, but both PGQL and Q-learning could handle some increase in the step-size, possibly due to the stabilizing effect of using replay.
It is clear that PGQL outperforms the other two. At any point along the x-axis the agents have seen the same amount of data, which would indicate that PGQL is more data efï¬cient than either of the vanilla methods since it has the highest performance at practically every point.
# 5.2 ATARI | 1611.01626#39 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 39 | Score | Variant | GMAN* | GMAN-1 | GAN | GMAN-0 | GMAN-max | mod-GAN
0.184 | GMAN* | - | -0.007 | -0.040 | -0.020 | -0.028 | -0.089
0.067 | GMAN-1 | 0.007 | - | -0.008 | -0.008 | -0.021 | -0.037
0.030 | GAN | 0.040 | 0.008 | - | 0.002 | -0.018 | -0.058
0.005 | GMAN-0 | 0.020 | 0.008 | 0.002 | - | -0.013 | -0.018
-0.091 | GMAN-max | 0.028 | 0.021 | 0.018 | 0.013 | - | -0.011
-0.213 | mod-GAN | 0.089 | 0.037 | 0.058 | 0.018 | 0.011 | - | 1611.01673#39 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01576 | 40 | Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.
Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Francis Lau. A C-LSTM neural network for text classification. arXiv preprint arXiv:1511.08630, 2015.
# APPENDIX
BEAM SEARCH RANKING CRITERION
The modified log-probability ranking criterion we used in beam search for translation experiments is:
$\log(p_{\mathrm{cand}}) = \frac{T_{\mathrm{trg}}^{\alpha}}{T^{\alpha}} \sum_{i=1}^{T} \log\big(p(w_i \mid w_1 \ldots w_{i-1})\big), \qquad (9)$
where α is a length normalization parameter (Wu et al., 2016), $w_i$ is the $i$th output character, and $T_{\mathrm{trg}}$ is a "target length" equal to the source sentence length plus five characters. This reduces at α = 0 to ordinary beam search with probabilities:
$\log(p_{\mathrm{cand}}) = \sum_{i=1}^{T} \log\big(p(w_i \mid w_1 \ldots w_{i-1})\big), \qquad (10)$
and at α = 1 to beam search with probabilities normalized by length (up to the target length): | 1611.01576#40 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data,
but the dependence of each timestep's computation on the previous timestep's
output limits parallelism and makes RNNs unwieldy for very long sequences. We
introduce quasi-recurrent neural networks (QRNNs), an approach to neural
sequence modeling that alternates convolutional layers, which apply in parallel
across timesteps, and a minimalist recurrent pooling function that applies in
parallel across channels. Despite lacking trainable recurrent layers, stacked
QRNNs have better predictive accuracy than stacked LSTMs of the same hidden
size. Due to their increased parallelism, they are up to 16 times faster at
train and test time. Experiments on language modeling, sentiment
classification, and character-level neural machine translation demonstrate
these advantages and underline the viability of QRNNs as a basic building block
for a variety of sequence tasks. | http://arxiv.org/pdf/1611.01576 | James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher | cs.NE, cs.AI, cs.CL, cs.LG | Submitted to conference track at ICLR 2017 | null | cs.NE | 20161105 | 20161121 | [
{
"id": "1605.07725"
},
{
"id": "1508.06615"
},
{
"id": "1606.01305"
},
{
"id": "1610.10099"
},
{
"id": "1609.08144"
},
{
"id": "1602.00367"
},
{
"id": "1511.08630"
},
{
"id": "1609.07843"
},
{
"id": "1608.06993"
},
{
"id": "1610.03017"
},
{
"id": "1601.06759"
},
{
"id": "1606.02960"
}
] |
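A minimal Python sketch of the beam-search ranking criterion in the 1611.01576#40 chunk above, under the assumption that Eq. (9) is the log-probability sum scaled by (T_trg/T)^alpha; the function and variable names are ours, not the authors'.

```python
def ranking_score(token_logprobs, src_len, alpha=1.0):
    # Length-normalized ranking score, a sketch of Eq. (9) above:
    # (T_trg / T)^alpha * sum_i log p(w_i | w_<i), with T_trg = src_len + 5.
    T = len(token_logprobs)
    T_trg = src_len + 5
    return (T_trg / T) ** alpha * sum(token_logprobs)

# alpha = 0 recovers the plain log-probability sum of Eq. (10);
# alpha = 1 normalizes by candidate length, up to the (constant) target length.
short = [-0.2, -0.3, -0.4]
longer = [-0.2, -0.3, -0.4, -0.35]
print(ranking_score(short, src_len=10, alpha=0.0), ranking_score(longer, src_len=10, alpha=0.0))
print(ranking_score(short, src_len=10, alpha=1.0), ranking_score(longer, src_len=10, alpha=1.0))
```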
1611.01578 | 40 | 10
# 5 CONCLUSION
In this paper we introduce Neural Architecture Search, an idea of using a recurrent neural network to compose neural network architectures. By using a recurrent network as the controller, our method is flexible so that it can search a variable-length architecture space. Our method has strong empirical performance on very challenging benchmarks and presents a new research direction for automatically finding good neural network architectures. The code for running the models found by the controller on CIFAR-10 and PTB will be released at https://github.com/tensorflow/models . Additionally, we have added the RNN cell found using our method under the name NASCell into TensorFlow, so others can easily use it.
ACKNOWLEDGMENTS
We thank Greg Corrado, Jeff Dean, David Ha, Lukasz Kaiser and the Google Brain team for their help with the project.
# REFERENCES
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In NAACL, 2016.
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016. | 1611.01578#40 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
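The 1611.01578#40 chunk above describes a controller trained with reinforcement learning to maximize the validation accuracy of the architectures it emits. The toy sketch below is not the authors' controller: it replaces the RNN with independent categorical choices and the child model with a dummy reward, keeping only the sample–evaluate–REINFORCE loop.

```python
import numpy as np

rng = np.random.default_rng(0)
# One categorical decision per controller step; here a single step choosing
# among three hypothetical filter sizes. Real controllers emit many steps.
choices = [np.zeros(3)]

def sample_architecture():
    arch = []
    for logits in choices:
        p = np.exp(logits - logits.max()); p /= p.sum()
        arch.append(int(rng.choice(len(p), p=p)))
    return arch

def reinforce_update(arch, reward, lr=0.1, baseline=0.7):
    # REINFORCE: shift logits so that high-reward choices become more likely.
    for logits, a in zip(choices, arch):
        p = np.exp(logits - logits.max()); p /= p.sum()
        grad = -p
        grad[a] += 1.0                       # d log p(a) / d logits
        logits += lr * (reward - baseline) * grad

def child_model_accuracy(arch):
    # Stand-in for training a child network and measuring validation accuracy.
    return 0.9 if arch[0] == 1 else 0.6

for _ in range(300):
    arch = sample_architecture()
    reinforce_update(arch, child_model_accuracy(arch))

print(choices[0])   # the logits should now favour choice 1
```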
1611.01600 | 40 | where $c_1 = -\frac{1}{2}\|\nabla\ell(\hat{w}^{t-1}) \oslash d^{t-1}\|_{D^{t-1}}^2$ is independent of $\alpha_l$ and $b_l$. Since $\alpha_l > 0$ and $d_l^{t-1} > 0$ for all $l = 1, 2, \ldots, L$, we have $b_l^t = \mathrm{sign}(w_l^t)$. Moreover,
$\frac{1}{2}\sum_{l=1}^{L}\sum_{i=1}^{n_l} [d_l^{t-1}]_i([\alpha_l b_l^t]_i - [w_l^t]_i)^2 + c_1 = \frac{1}{2}\sum_{l=1}^{L}\sum_{i=1}^{n_l} [d_l^{t-1}]_i(\alpha_l - |[w_l^t]_i|)^2 + c_1 = \sum_{l=1}^{L}\left(\frac{1}{2}\|d_l^{t-1}\|_1\,\alpha_l^2 - \|d_l^{t-1} \odot w_l^t\|_1\,\alpha_l\right) + c_2,$
where $c_2 = c_1 + \frac{1}{2}\|d_l^{t-1} \odot w_l^t \odot w_l^t\|_1$. Thus, the optimal $\alpha_l^t$ is
$\alpha_l^t = \frac{\|d_l^{t-1} \odot w_l^t\|_1}{\|d_l^{t-1}\|_1}.$
# B PROOF OF THEOREM 3.1 | 1611.01600#40 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
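The closed form reconstructed in the 1611.01600#40 chunk above ($b_l = \mathrm{sign}(w_l^t)$, $\alpha_l = \|d_l \odot w_l\|_1 / \|d_l\|_1$) can be sketched in NumPy as follows; the function name and the epsilon guard are ours.

```python
import numpy as np

def loss_aware_binarize(w, d, eps=1e-8):
    # Closed-form proximal step from the proof above (a sketch):
    # b = sign(w), alpha = ||d ⊙ w||_1 / ||d||_1, with d a positive
    # diagonal curvature estimate (e.g. Adam's second moments).
    b = np.where(w >= 0, 1.0, -1.0)
    alpha = float(np.sum(d * np.abs(w)) / (np.sum(d) + eps))
    return alpha, b

w = np.array([0.7, -0.2, 0.05, -1.3])   # full-precision weights of one layer
d = np.array([1.0, 2.0, 0.5, 1.5])      # positive diagonal Hessian estimate
alpha, b = loss_aware_binarize(w, d)
print(alpha, b)                          # the binarized layer is alpha * b
```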
1611.01626 | 40 | We tested our algorithm on the full suite of Atari benchmarks (Bellemare et al., 2012), using a neural network to parameterize the policy. In ï¬gure 2 we show how a policy network can be augmented with a parameterless additional layer which outputs the Q-value estimate. With the exception of the extra layer, the architecture and parameters were chosen to exactly match the asynchronous advantage actor-critic (A3C) algorithm presented in Mnih et al. (2016), which in turn reused many of the settings from Mnih et al. (2015). Speciï¬cally we used the exact same learning rate, number of workers, entropy penalty, bootstrap horizon, and network architecture. This allows a fair comparison between A3C and PGQL, since the only difference is the addition of the Q-learning step. Our technique augmented A3C with the following change: After each actor-learner has accumulated the gradient for the policy update, it performs a single step of Q-learning from replay data as described in equation (13), where the minibatch size was 32 and the Q-learning learning rate was chosen to be 0.5 times the actor-critic learning rate (we mention learning rate | 1611.01626#40 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
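The 1611.01626#40 chunk above mentions a parameterless extra layer that turns the policy and value heads into a Q-value estimate. A hedged sketch, assuming the estimate takes the form Q(s,a) = alpha * (log pi(a|s) + H(pi(.|s))) + V(s) with alpha = 0.1; the exact parameterization is not spelled out in this chunk.

```python
import numpy as np

def q_from_policy(logits, v, alpha=0.1):
    # Parameterless Q-value read-out on top of a policy head (logits) and a
    # value head (v). The functional form above is an assumption.
    m = logits.max()
    logp = logits - (m + np.log(np.sum(np.exp(logits - m))))   # log-softmax
    entropy = -np.sum(np.exp(logp) * logp)
    return alpha * (logp + entropy) + v

q = q_from_policy(np.array([2.0, 0.5, -1.0]), v=1.3)
print(q)
```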
1611.01673 | 40 | Score | Variant | GMAN-0 | GMAN-1 | GMAN* | mod-GAN
0.172 | GMAN-0 | - | -0.022 | -0.062 | -0.088
0.050 | GMAN-1 | 0.022 | - | 0.006 | -0.078
-0.055 | GMAN* | 0.062 | -0.006 | - | -0.001
-0.167 | mod-GAN | 0.088 | 0.078 | 0.001 | -
Score | Variant | GMAN-0 | GMAN* | GMAN-1 | mod-GAN
0.180 | GMAN-0 | - | -0.008 | -0.041 | -0.132
0.122 | GMAN* | 0.008 | - | -0.038 | -0.092
0.010 | GMAN-1 | 0.041 | 0.038 | - | -0.089
-0.313 | mod-GAN | 0.132 | 0.092 | 0.089 | -
GMAN-1 | GMAN-0 | GMAN* | mod-GAN
Score | 6.001 ± 0.194 | 5.957 ± 0.135 | 5.955 | 1611.01673#40 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01578 | 41 | Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. JMLR, 2003.
James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. JMLR, 2012.
James Bergstra, R´emi Bardenet, Yoshua Bengio, and Bal´azs K´egl. Algorithms for hyper-parameter optimization. In NIPS, 2011.
James Bergstra, Daniel Yamins, and David D Cox. Making a science of model search: Hyperpa- rameter optimization in hundreds of dimensions for vision architectures. ICML, 2013.
Alan W. Biermann. The inference of regular LISP programs from examples. IEEE transactions on Systems, Man, and Cybernetics, 1978.
Wei-Chen Cheng, Stanley Kok, Hoai Vu Pham, Hai Leong Chieu, and Kian Ming Adam Chai. Language modeling with sum-product networks. In INTERSPEECH, 2014.
Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005. | 1611.01578#41 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 41 | # B PROOF OF THEOREM 3.1
Let $\alpha = [\alpha_1, \ldots, \alpha_L]^\top$, and denote the objective in (3) by $F(w, \alpha)$. As $\hat{w}^t$ is the minimizer in (6), we have
$\ell(\hat{w}^{t-1}) + \nabla\ell(\hat{w}^{t-1})^\top(\hat{w}^t - \hat{w}^{t-1}) + \frac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top D^{t-1}(\hat{w}^t - \hat{w}^{t-1}) \le \ell(\hat{w}^{t-1}). \quad (9)$
From Assumption A1, we have
$\ell(\hat{w}^t) \le \ell(\hat{w}^{t-1}) + \nabla\ell(\hat{w}^{t-1})^\top(\hat{w}^t - \hat{w}^{t-1}) + \frac{\beta}{2}\|\hat{w}^t - \hat{w}^{t-1}\|^2. \quad (10)$
Using (9) and (10), we obtain
$\ell(\hat{w}^t) \le \ell(\hat{w}^{t-1}) - \frac{1}{2}(\hat{w}^t - \hat{w}^{t-1})^\top(D^{t-1} - \beta I)(\hat{w}^t - \hat{w}^{t-1}).$
Let $c_3 = \min_{k,l,t}([d_l^{t-1}]_k - \beta) > 0$. Then,
$\ell(\hat{w}^t) \le \ell(\hat{w}^{t-1}) - \frac{c_3}{2}\|\hat{w}^t - \hat{w}^{t-1}\|_2^2. \quad (11)$ | 1611.01600#41 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
1611.01603 | 41 | Table 3: Results on CNN/DailyMail datasets. We also include the results of previous ensemble methods (marked with †) for completeness.
# 6 CONCLUSION
In this paper, we introduce BIDAF, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization. The experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test. The ablation analyses demonstrate the importance of each component in our model. The visualizations and discussions show that our model is learning a suitable representation for MC and is capable of answering complex questions by attending to correct locations in the given paragraph. Future work involves extending our approach to incorporate multiple hops of the attention layer.
ACKNOWLEDGMENTS
This research was supported by the NSF (IIS 1616112), NSF (III 1703166), Allen Institute for AI (66-9175), Allen Distinguished Investigator Award, Google Research Faculty Award, and Samsung GRO Award. We thank the anonymous reviewers for their helpful comments.
# REFERENCES | 1611.01603#41 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 41 | (13), where the minibatch size was 32 and the Q-learning learning rate was chosen to be 0.5 times the actor-critic learning rate (we mention learning rate ratios rather than choice of η in (14) because the updates happen at different frequencies and from different data sources). Each actor-learner thread maintained a replay buffer of the last 100k transitions seen by that thread. We ran the learning for 50 million agent steps (200 million Atari frames), as in (Mnih et al., 2016). | 1611.01626#41 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
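A self-contained sketch of the extra Q-learning step described in the 1611.01626#41 chunk above: a 100k-transition replay buffer per learner, minibatches of 32, and a step size of 0.5 times the actor-critic learning rate. A tabular Q-function stands in for the network, and the actor-critic learning rate value used here is an assumption.

```python
import random
from collections import deque

GAMMA = 0.99
AC_LR = 7e-4                     # assumed actor-critic step size (not from the chunk)
Q_LR = 0.5 * AC_LR               # Q-learning rate: 0.5x the actor-critic rate
replay = deque(maxlen=100_000)   # last 100k transitions kept per learner
Q = {}                           # tabular stand-in for the Q-value head

def q(s, a):
    return Q.get((s, a), 0.0)

def q_learning_step(n_actions, batch_size=32):
    # One TD(0) update on a replay minibatch, applied after each
    # policy-gradient update in this sketch of the scheme described above.
    if len(replay) < batch_size:
        return
    for s, a, r, s2, done in random.sample(list(replay), batch_size):
        target = r + (0.0 if done else GAMMA * max(q(s2, b) for b in range(n_actions)))
        Q[(s, a)] = q(s, a) + Q_LR * (target - q(s, a))
```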
1611.01578 | 42 | Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, et al. Large scale distributed deep networks. In NIPS, 2012.
Dario Floreano, Peter D¨urr, and Claudio Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 2008.
Yarin Gal. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprint arXiv:1512.05287, 2015.
David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In CVPR, 2016a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016b.
# Under review as a conference paper at ICLR 2017 | 1611.01578#42 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01600 | 42 | Let $c_3 = \min_{k,l,t}([d_l^{t-1}]_k - \beta) > 0$. Then,
$\ell(\hat{w}^t) \le \ell(\hat{w}^{t-1}) - \frac{c_3}{2}\|\hat{w}^t - \hat{w}^{t-1}\|_2^2. \quad (11)$
From Assumption A2, $\ell$ is bounded from below. Together with the fact that $\{\ell(\hat{w}^t)\}$ is monotonically decreasing from (11), the sequence $\{\ell(\hat{w}^t)\}$ converges, thus the sequence $\{F(\hat{w}^t, \alpha^t)\}$ also converges.
# C PROOF OF PROPOSITION 3.2
Let the singular values of $W$ be $\lambda_1(W) \ge \lambda_2(W) \ge \cdots \ge \lambda_m(W)$. Then
$\lambda_1^2(W) \ge \frac{1}{m}\sum_{i=1}^{m}\lambda_i^2(W) = \frac{1}{m}\|W\|_F^2 = n.$
Thus, $\lambda_1(W) \ge \sqrt{n}$. | 1611.01600#42 | Loss-aware Binarization of Deep Networks | Deep neural network models, though very powerful and highly successful, are
computationally expensive in terms of space and time. Recently, there have been
a number of attempts on binarizing the network weights and activations. This
greatly reduces the network size, and replaces the underlying multiplications
to additions or even XNOR bit operations. However, existing binarization
schemes are based on simple matrix approximation and ignore the effect of
binarization on the loss. In this paper, we propose a proximal Newton algorithm
with diagonal Hessian approximation that directly minimizes the loss w.r.t. the
binarized weights. The underlying proximal step has an efficient closed-form
solution, and the second-order information can be efficiently obtained from the
second moments already computed by the Adam optimizer. Experiments on both
feedforward and recurrent networks show that the proposed loss-aware
binarization algorithm outperforms existing binarization schemes, and is also
more robust for wide and deep networks. | http://arxiv.org/pdf/1611.01600 | Lu Hou, Quanming Yao, James T. Kwok | cs.NE, cs.LG | null | null | cs.NE | 20161105 | 20180510 | [
{
"id": "1605.04711"
},
{
"id": "1606.06160"
},
{
"id": "1502.04390"
}
] |
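A quick numerical check of the bound reconstructed in the 1611.01600#42 chunk above, under the assumption that W has entries in {-1, +1} so that its squared Frobenius norm equals m*n.

```python
import numpy as np

# For an m x n matrix with +/-1 entries, ||W||_F^2 = m*n, hence
# lambda_1(W)^2 >= (1/m) * sum_i lambda_i(W)^2 = n.
rng = np.random.default_rng(0)
m, n = 64, 256
W = rng.choice([-1.0, 1.0], size=(m, n))
lam1 = np.linalg.svd(W, compute_uv=False)[0]
print(lam1, np.sqrt(n), bool(lam1 >= np.sqrt(n)))
```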
1611.01603 | 42 | 9
# REFERENCES
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zit- nick, and Devi Parikh. Vqa: Visual question answering. In ICCV, 2015.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn/daily mail reading comprehension task. In ACL, 2016.
Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over- attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016.
Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016. | 1611.01603#42 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 42 | In the results we compare against both A3C and a variant of asynchronous deep Q-learning. The changes we made to Q-learning are to make it similar to our method, with some tuning of the hyper- parameters for performance. We use the exact same network, the exploration policy is a softmax over the Q-values with a temperature of 0.1, and the Q-values are parameterized as in equation (12) (i.e., similar to the dueling architecture (Wang et al., 2016)), where α = 0.1. The Q-value updates are performed every 4 steps with a minibatch of 32 (roughly 5 times more frequently than PGQL). For each method, all games used identical hyper-parameters.
The results across all games are given in table 3 in the appendix. All scores have been normalized by subtracting the average score achieved by an agent that takes actions uniformly at random.
Published as a conference paper at ICLR 2017 | 1611.01626#42 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
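The Q-learning baseline in the 1611.01626#42 chunk above explores with a softmax over the Q-values at temperature 0.1; a minimal sketch of that exploration policy, with illustrative Q-values of our choosing.

```python
import numpy as np

def boltzmann_policy(q_values, temperature=0.1):
    # Softmax (Boltzmann) distribution over Q-values, as used by the
    # Q-learning baseline described above.
    z = (q_values - np.max(q_values)) / temperature
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(0)
p = boltzmann_policy(np.array([1.00, 1.05, 0.20]))
action = int(rng.choice(len(p), p=p))
print(p, action)
```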
1611.01673 | 42 | [Figure 15: Sample of pictures generated by GMAN-0 on the CIFAR dataset; the two panels show real images and generated images.]
A.4 SOMEWHAT RELATED WORK
A GAN framework with two discriminators appeared in Yoo et al. (2016), however, it is applicable only in a semi-supervised case where a label can be assigned to subsets of the dataset (e.g., X = {X_1 = Domain 1, X_2 = Domain 2, ...}). In contrast, our framework applies to an unsupervised scenario where an obvious partition of the dataset is unknown. Furthermore, extending GMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminators per domain, therefore our line of research is strictly orthogonal to | 1611.01673#42 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01578 | 43 | 11
Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.
Sepp Hochreiter and Juergen Schmidhuber. Long short-term memory. Neural Computation, 1997.
Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.
Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016b.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochas- tic depth. arXiv preprint arXiv:1603.09382, 2016c. | 1611.01578#43 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01603 | 43 | Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. In EMNLP, 2016.
Karl Moritz Hermann, Tom´as Kocisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading childrenâs books with explicit memory representations. In ICLR, 2016.
Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. In ACL, 2016.
Yoon Kim. Convolutional neural networks for sentence classiï¬cation. In EMNLP, 2014.
Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representation with max-pooling improves machine reading. In NAACL-HLT, 2016. | 1611.01603#43 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 43 | 9
Each game was tested 5 times per method with the same hyper-parameters but with different ran- dom seeds. The scores presented correspond to the best score obtained by any run from a random start evaluation condition (Mnih et al., 2016). Overall, PGQL performed best in 34 games, A3C performed best in 7 games, and Q-learning was best in 10 games. In 6 games two or more methods tied. In tables 1 and 2 we give the mean and median normalized scores as percentage of an expert human normalized score across all games for each tested algorithm from random and human-start conditions respectively. In a human-start condition the agent takes over control of the game from randomly selected human-play starting points, which generally leads to lower performance since the agent may not have found itself in that state during training. In both cases, PGQL has both the highest mean and median, and the median score exceeds 100%, the human performance threshold. | 1611.01626#43 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 43 | extending GMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminators per domain, therefore our line of research is strictly orthogonal to that of their multi-domain discriminator approach. Also, note that assigning a discriminator to each domain is akin to prescribing a new discriminator to each value of a conditional variable in conditional GANs (Mirza & Osindero (2014)). In this case, we interpret GMAN as introducing multiple conditional discriminators and not a discriminator for each of the possibly exponentially many conditional labels. In Section 4.4, we describe an approach to customize adversarial training to better suit the development of the generator. An approach with similar conceptual underpinnings was described in Ravanbakhsh et al. (2016), however, similar to the above, it is only admissible in a semi-supervised scenario whereas ours applies to the unsupervised case. A.5 SOFTMAX REPRESENTABILITY Let $\mathrm{softmax}(V_i) = \hat{V} \in [\min_i V_i, \max_i V_i]$. Also let $a = \arg\min_i V_i$, $b = \arg\max_i V_i$, and $\hat{V}(t) = V((1-t)D_a + tD_b)$ so that $\hat{V}(0) = | 1611.01673#43 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
1611.01578 | 44 | Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classiï¬ers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
Kevin Jarrett, Koray Kavukcuoglu, Yann Lecun, et al. What is the best multi-stage architecture for object recognition? In ICCV, 2009.
Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In ICML, 2015.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classiï¬cation with deep convo- lutional neural networks. In NIPS, 2012. | 1611.01578#44 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01603 | 44 | Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. Learning recurrent span repre- sentations for extractive question answering. arXiv preprint arXiv:1611.01436, 2016.
Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. In NIPS, 2016.
Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask your neurons: A neural-based ap- proach to answering questions about images. In ICCV, 2015.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In EMNLP, 2016.
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, 2013.
Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. Reasonet: Learning to stop reading in machine comprehension. arXiv preprint arXiv:1609.05284, 2016. | 1611.01603#44 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 44 | It is worth noting that PGQL was the worst performer in only one game; in cases where it was not the outright winner, it was generally somewhere in between the performance of the other two algorithms. Figure 3 shows some sample traces of games where PGQL was the best performer. In these cases PGQL has far better data efficiency than the other methods. In figure 4 we show some of the games where PGQL under-performed. In practically every case where PGQL did not perform well it had better data efficiency early on in the learning, but performance saturated or collapsed. We hypothesize that in these cases the policy has reached a local optimum, or over-fit to the early data, and might perform better were the hyper-parameters to be tuned.
Table 1: Mean and median normalized scores for the Atari suite from random starts, as a percentage of human normalized score. A3C: mean 636.8, median 107.3; Q-learning: mean 756.3, median 58.9; PGQL: mean 877.2, median 145.6.
Table 2: Mean and median normalized scores for the Atari suite from human starts, as a percentage of human normalized score. A3C: mean 266.6, median 58.3; Q-learning: mean 246.6, median 30.5; PGQL: mean 416.7, median 103.3. | 1611.01626#44 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
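The tables in the 1611.01626#44 chunk above report scores as a percentage of an expert human normalized score; a sketch, assuming the usual convention of subtracting the random-agent score and dividing by the human-minus-random gap. The example numbers are illustrative only.

```python
def human_normalized(agent_score, random_score, human_score):
    # Human-normalized score as a percentage:
    # 100 * (agent - random) / (human - random)   (assumed convention).
    return 100.0 * (agent_score - random_score) / (human_score - random_score)

print(human_normalized(agent_score=4000.0, random_score=150.0, human_score=7000.0))
```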
1611.01673 | 44 | $= \arg\min_i V_i$, $b = \arg\max_i V_i$, and $\hat{V}(t) = V((1-t)D_a + tD_b)$ so that $\hat{V}(0) = V_a$ and $\hat{V}(1) = V_b$. The softmax and the minimax objective $V(D_i, G)$ are both continuous in their inputs, so by the intermediate value theorem, we have that $\exists\, \hat{t} \in [0, 1]$ s.t. $\hat{V}(\hat{t}) = \hat{V}$, which implies $\exists\, \hat{D} \in \mathcal{D}$ s.t. $V(\hat{D}, G) = \hat{V}$. This result implies that the softmax (and any other continuous substitute) can be interpreted as returning $V(\hat{D}, G)$ for some $\hat{D}$ selected by computing another, unknown function over the space of the discriminators. This result holds even if $\hat{D}$ is not representable by the architecture chosen for $D$'s neural network. | 1611.01673#44 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
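The proof fragment in the chunk above relies on the fact that a softmax over the discriminator values V_i returns a value between min_i V_i and max_i V_i, so the intermediate value theorem yields some D̂ on the path between D_a and D_b. The snippet below is a small numerical check of that boundedness claim; the specific weighting w_i = softmax(lambda * V_i) and the value of lambda are assumptions used only for illustration.

```python
# Numerical check (under an assumed softmax weighting) that the aggregated value
# lies between the smallest and largest discriminator values.
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0  # sharpness; larger lam pushes the weighted value toward max_i V_i

for _ in range(1000):
    v = rng.normal(size=int(rng.integers(2, 8)))   # values V_i for a random number of discriminators
    w = np.exp(lam * v - (lam * v).max())
    w /= w.sum()
    v_hat = float((w * v).sum())
    assert v.min() - 1e-9 <= v_hat <= v.max() + 1e-9
print("softmax-weighted value always lies in [min V_i, max V_i]")
```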
1611.01578 | 45 | Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015.
Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In AISTATS, 2015.
Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
Percy Liang, Michael I. Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian approach. In ICML, 2010.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2013.
David G. Lowe. Object recognition from local scale-invariant features. In CVPR, 1999. | 1611.01578#45 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
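The summary above describes a controller trained with reinforcement learning to propose architectures whose validation accuracy serves as the reward. The following toy sketch shows a REINFORCE loop with a moving-average baseline in that spirit; the two-slot search space, the simulated reward function, and every hyper-parameter here are invented for illustration and are not the paper's controller.

```python
# Hypothetical REINFORCE loop over a tiny, made-up architecture search space.
import numpy as np

rng = np.random.default_rng(0)
n_choices_per_slot = [3, 4]                      # e.g. a filter-size slot and a filter-count slot (assumed)
logits = [np.zeros(n) for n in n_choices_per_slot]
baseline, lr = 0.0, 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_architecture():
    return [int(rng.choice(len(l), p=softmax(l))) for l in logits]

def reward(arch):
    # Stand-in for "train the child network and measure validation accuracy".
    return 0.5 + 0.1 * arch[0] + 0.05 * arch[1] + 0.01 * rng.normal()

for step in range(500):
    arch = sample_architecture()
    r = reward(arch)
    baseline = 0.9 * baseline + 0.1 * r          # moving-average baseline reduces gradient variance
    for slot, choice in enumerate(arch):
        grad = -softmax(logits[slot])
        grad[choice] += 1.0                      # d log p(choice) / d logits[slot]
        logits[slot] += lr * (r - baseline) * grad

print("most likely choice per slot:", [int(np.argmax(l)) for l in logits])
```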
1611.01603 | 45 | Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. In EMNLP, 2016.
Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905, 2016.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In ICLR, 2015.
Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016a. | 1611.01603#45 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
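The summary above describes attention flowing in two directions, context-to-query and query-to-context, computed from a shared similarity matrix and kept per-position rather than summarized early. The numpy sketch below illustrates those two directions; the dot-product similarity, the random encodings, and the dimensions are simplifying assumptions rather than the model's exact trainable similarity function.

```python
# Simplified sketch of bi-directional attention between context and query encodings.
import numpy as np

rng = np.random.default_rng(0)
T, J, d = 6, 4, 8                      # context length, query length, hidden size (assumed)
H = rng.normal(size=(T, d))            # context encodings
U = rng.normal(size=(J, d))            # query encodings

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

S = H @ U.T                            # similarity matrix, shape (T, J)

# Context-to-query: each context position attends over query words.
a = softmax(S, axis=1)                 # (T, J)
U_tilde = a @ U                        # (T, d) attended query vector per context position

# Query-to-context: attend over context positions via the best-matching query word.
b = softmax(S.max(axis=1))             # (T,)
h_tilde = b @ H                        # (d,) context summary, broadcast to every position below

G = np.concatenate([H, U_tilde, H * U_tilde, H * h_tilde], axis=1)
print(G.shape)                         # (T, 4 * d) query-aware context representation
```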
1611.01626 | 45 | Table 2: Mean and median normalized scores for the Atari suite from human starts, as a percentage of human normalized score.
[Figure: learning curves on assault, battle zone, chopper command, and yars revenge comparing A3C, Q-learning, and PGQL; x-axis: agent steps (1e7), y-axis: score.]
Figure 3: Some Atari runs where PGQL performed well.
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 45 | A.6 UNCONSTRAINED OPTIMIZATION To convert the GMAN* minimax formulation to an unconstrained minimax formulation, we introduce an auxiliary variable, Λ, define λ(Λ) = log(1 + e^Λ), and let the generator minimize over Λ ∈ R. A.7 BOOSTING WITH AdaBoost.OL AdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner's slight edge over random guessing (P(correct label) = 0.5 + γ, γ ∈ (0, 0.5]), and in fact, allows γ < 0. This is crucial because our weak learners are deep nets with unknown, possibly negative, γ's. [Figure] Figure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (similar results with P-boost). A.8 EXPERIMENTAL SETUP All experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)). We use convolutional transpose layers (Zeiler et al. (2010)) for G | 1611.01673#45 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
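Section A.6 in the chunk above maps an unconstrained auxiliary variable Λ through the softplus λ(Λ) = log(1 + e^Λ) so that the generator can minimize over all of R while the effective weight stays positive. The snippet below is a tiny illustration of that mapping in a numerically stable form; the sample values of Λ are arbitrary.

```python
# Softplus reparameterization: unconstrained Lambda -> positive lambda.
import numpy as np

def softplus(x):
    # Stable evaluation of log(1 + exp(x)).
    return np.log1p(np.exp(-abs(x))) + max(x, 0.0)

for Lam in (-5.0, -1.0, 0.0, 1.0, 5.0):
    print(f"Lambda = {Lam:+.1f}  ->  lambda = {softplus(Lam):.4f}")
```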