doi (stringlengths 10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8) | updated (stringlengths 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1611.01578 | 46 | David G. Lowe. Object recognition from local scale-invariant features. In CVPR, 1999.
Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Towards automatically-tuned neural networks. In Proceedings of the 2016 Workshop on Automatic Machine Learning, pp. 58–65, 2016.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.
Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In SLT, pp. 234–239, 2012.
Andriy Mnih and Geoffrey Hinton. Three new graphical models for statistical language modelling. In ICML, 2007.
Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In ICLR, 2015. | 1611.01578#46 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
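The controller described in the entry above emits an architecture token by token and is trained with REINFORCE on the child model's validation accuracy. A minimal Python sketch of that outer loop, assuming a toy four-decision search space and a hypothetical `train_and_evaluate` stand-in (not the paper's actual controller, reward, or search space):

```python
import torch
import torch.nn as nn

# Toy search space: each of 4 decisions picks a filter height from CHOICES.
CHOICES = [1, 3, 5, 7]
NUM_DECISIONS = 4

class Controller(nn.Module):
    """Tiny LSTM 'controller' that emits one softmax decision per step."""
    def __init__(self, hidden=32):
        super().__init__()
        self.hidden = hidden
        self.embed = nn.Embedding(len(CHOICES) + 1, hidden)   # +1 for a start token
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.head = nn.Linear(hidden, len(CHOICES))

    def sample(self):
        h = torch.zeros(1, self.hidden)
        c = torch.zeros(1, self.hidden)
        token = torch.zeros(1, dtype=torch.long)               # start token
        actions, log_probs = [], []
        for _ in range(NUM_DECISIONS):
            h, c = self.lstm(self.embed(token), (h, c))
            dist = torch.distributions.Categorical(logits=self.head(h))
            a = dist.sample()
            actions.append(CHOICES[a.item()])
            log_probs.append(dist.log_prob(a))
            token = a + 1                                      # feed the choice back in
        return actions, torch.stack(log_probs).sum()

def train_and_evaluate(architecture):
    """Hypothetical stand-in: build, train and validate the child network."""
    return 0.5                                                 # validation accuracy in [0, 1]

controller = Controller()
opt = torch.optim.Adam(controller.parameters(), lr=3.5e-4)
baseline = 0.0
for step in range(10):
    arch, log_prob = controller.sample()
    reward = train_and_evaluate(arch)
    baseline = 0.95 * baseline + 0.05 * reward                 # moving-average baseline
    loss = -(reward - baseline) * log_prob                     # REINFORCE objective
    opt.zero_grad(); loss.backward(); opt.step()
```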
1611.01603 | 46 | Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016a.
Caiming Xiong, Victor Zhong, and Richard Socher. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604, 2016b.
Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV, 2016.
Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W Cohen, and Ruslan Salakhutdinov. Words or characters? Fine-grained gating for reading comprehension. arXiv preprint arXiv:1611.01724, 2016.
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274, 2015.
Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end reading comprehension with dynamic answer chunk ranking. arXiv preprint arXiv:1610.09996, 2016. | 1611.01603#46 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
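The bi-directional attention described in the abstract above keeps a query-aware vector for every context word instead of summarizing the context into one fixed vector. A rough numpy sketch of the two attention directions, assuming generic context/query encodings and a generic similarity matrix (shapes only, not the released model):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

T, J, d2 = 6, 4, 8                      # context length, query length, 2d
H = np.random.randn(T, d2)              # contextual context vectors h_1..h_T
U = np.random.randn(J, d2)              # contextual query vectors u_1..u_J
S = H @ U.T                             # (T, J) similarity matrix; any alpha(h, u) fits here

# Context-to-query attention: every context word attends over the query words.
a = softmax(S, axis=1)                  # (T, J)
U_tilde = a @ U                         # (T, 2d) attended query vectors, one per context word

# Query-to-context attention: weight context words by their best query match.
b = softmax(S.max(axis=1))              # (T,)
h_tilde = b @ H                         # (2d,)
H_tilde = np.tile(h_tilde, (T, 1))      # repeat across the context

# Query-aware representation of each context word (no early summarization).
G = np.concatenate([H, U_tilde, H * U_tilde, H * H_tilde], axis=1)   # (T, 8d)
```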
1611.01626 | 46 | Figure 3: Some Atari runs where PGQL performed well.
[Plot panels for Figure 4: score vs. agent steps (×1e7) for A3C, Q-learning, and PGQL on breakout, hero, qbert, and up'n'down.]
Figure 4: Some Atari runs where PGQL performed poorly.
# 6 CONCLUSIONS | 1611.01626#46 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01673 | 46 | conducted using an architecture similar to DCGAN (Radford et al. (2015)). We use convolutional transpose layers (Zeiler et al. (2010)) for G and strided convolutions for D except for the input of G and the last layer of D. We use the single step gradient method as in (Nowozin et al. (2016)), and batch normalization (Ioffe & Szegedy (2015)) was used in each of the generator layers. The different discriminators were trained with varying dropout rates from (0.3, 0.7]. Variations in the discriminators were effected in two ways. We varied the architecture by varying the number of filters in the discriminator layers (reduced by factors of 2, 4 and so on), as well as varying dropout rates. Secondly, we also decorrelated the samples that the discriminators were training on by splitting the minibatch across the discriminators. The code was written in TensorFlow (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are: • Generator latent variables z | 1611.01673#46 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
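The experimental details in the entry above vary each discriminator's filters and dropout and split every minibatch across the discriminators. A rough PyTorch sketch of one such multi-discriminator step with the original minimax objective; the module interfaces and the plain-mean combiner for the generator's feedback are assumptions, not the repository's code:

```python
import torch
import torch.nn.functional as F

def gman_step(G, discriminators, d_opts, g_opt, real_batch, z_dim=100):
    """One training step with several discriminators (sketch, not the released code)."""
    # Decorrelate the discriminators by giving each its own shard of the minibatch.
    shards = torch.chunk(real_batch, len(discriminators))
    g_terms = []
    for D, d_opt, real in zip(discriminators, d_opts, shards):
        z = torch.rand(real.size(0), z_dim) * 2 - 1            # z ~ U(-1, 1)
        fake = G(z)

        # Single-step discriminator update; fake is detached so G is untouched.
        d_real, d_fake = D(real), D(fake.detach())
        d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                  + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Untampered minimax feedback for the generator: minimize log(1 - D(G(z))).
        d_fake_g = D(fake)
        g_terms.append(-F.binary_cross_entropy_with_logits(d_fake_g,
                                                           torch.zeros_like(d_fake_g)))

    # The paper studies several ways to combine the discriminators' feedback
    # (from harshest critic to softened averages); a plain mean is used here.
    g_loss = torch.stack(g_terms).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```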
1611.01578 | 47 | Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In ICLR, 2015.
Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013.
Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
Scott Reed and Nando de Freitas. Neural programmer-interpreters. In ICLR, 2015.
Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In NIPS, 2016.
Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Minimum risk training for neural machine translation. In ACL, 2016. | 1611.01578#47 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01626 | 47 | Figure 4: Some Atari runs where PGQL performed poorly.
# 6 CONCLUSIONS
We have made a connection between the fixed point of regularized policy gradient techniques and the Q-values of the resulting policy. For small regularization (the usual case) we have shown that the Bellman residual of the induced Q-values must be small. This leads us to consider adding an auxiliary update to the policy gradient which is related to the Bellman residual evaluated on a transformation of the policy. This update can be performed off-policy, using stored experience. We call the resulting method 'PGQL', for policy gradient and Q-learning. Empirically, we observe better data efficiency and stability of PGQL when compared to actor-critic or Q-learning alone. We verified the performance of PGQL on a suite of Atari games, where we parameterize the policy using a neural network, and achieved performance exceeding that of both A3C and Q-learning.
# 7 ACKNOWLEDGMENTS
We thank Joseph Modayil for many comments and suggestions on the paper, and Hubert Soyer for help with performance evaluation. We would also like to thank the anonymous reviewers for their constructive feedback.
# REFERENCES | 1611.01626#47 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
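The conclusion quoted above adds an auxiliary, off-policy Bellman-residual update computed on Q-values read off the policy's action preferences. A rough sketch of that auxiliary term under stated assumptions: a hypothetical `policy_net` returning (logits, V(s)) for discrete actions, and the estimate Q(s, a) ≈ α(log π(a|s) + H(π(·|s))) + V(s):

```python
import torch
import torch.nn.functional as F

def pgql_aux_loss(policy_net, batch, alpha=0.1, gamma=0.99):
    """Auxiliary off-policy Q-learning loss on replayed transitions (sketch).

    Assumes `policy_net(states)` returns (logits, V(s)) for discrete actions,
    and recovers Q-values from the action preferences as
    Q(s, a) ~ alpha * (log pi(a|s) + H(pi(.|s))) + V(s).
    """
    s, a, r, s_next, done = batch                    # replay tensors; `a` is int64

    def q_values(states):
        logits, v = policy_net(states)               # (B, A), (B, 1)
        logp = F.log_softmax(logits, dim=-1)
        entropy = -(logp.exp() * logp).sum(-1, keepdim=True)
        return alpha * (logp + entropy) + v          # (B, A)

    q = q_values(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * q_values(s_next).max(dim=1).values
    return F.mse_loss(q, target)                     # Bellman residual on stored experience

# Combined update, mixing the usual actor-critic loss with the auxiliary term:
#   total_loss = actor_critic_loss(on_policy_batch) + eta * pgql_aux_loss(policy_net, replay_batch)
```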
1611.01673 | 47 | and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are: • Generator latent variables z ~ U(−1, 1)^100 • Generator convolution transpose layers: (4, 4, 128), (8, 8, 64), (16, 16, 32), (32, 32, 1) • Base Discriminator architecture: (32, 32, 1), (16, 16, 32), (8, 8, 64), (4, 4, 128). • Variants have either convolution 3 (4, 4, 128) removed or all the filter sizes are divided by 2 or 4. That is, (32, 32, 1), (16, 16, 16), (8, 8, 32), (4, 4, 64) or (32, 32, 1), (16, 16, 8), (8, 8, 16), (4, 4, 32). • ReLU activations for all the hidden units. Tanh activation at the output units of the generator. Sigmoid at the output of the Discriminator. • Training was performed with Adam (Kingma & Ba (2014)) (lr = 2 × 10^−4, β1 = 0.5). • MNIST was trained | 1611.01673#47 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a
generative model by way of a two-player minimax game. In this paper, we propose
the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that
extends GANs to multiple discriminators. In previous work, the successful
training of GANs requires modifying the minimax objective to accelerate
training early on. In contrast, GMAN can be reliably trained with the original,
untampered objective. We explore a number of design perspectives with the
discriminator role ranging from formidable adversary to forgiving teacher.
Image generation tasks comparing the proposed framework to standard GANs
demonstrate GMAN produces higher quality samples in a fraction of the
iterations when measured by a pairwise GAM-type metric. | http://arxiv.org/pdf/1611.01673 | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan | cs.LG, cs.MA, cs.NE | Accepted as a conference paper (poster) at ICLR 2017 | null | cs.LG | 20161105 | 20170302 | [
{
"id": "1511.06390"
},
{
"id": "1511.05897"
},
{
"id": "1610.02920"
}
] |
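The hyperparameters listed in the entry above fix the MNIST generator's feature-map shapes, activations, and optimizer settings. A PyTorch sketch that reproduces those shapes; kernel sizes, padding, and batch-norm placement are assumptions (the released code is TensorFlow):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator matching the listed feature maps:
    z (100) -> 4x4x128 -> 8x8x64 -> 16x16x32 -> 32x32x1 (tanh output).
    """
    def __init__(self, z_dim=100):
        super().__init__()
        self.project = nn.Linear(z_dim, 128 * 4 * 4)
        self.net = nn.Sequential(
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 8x8x64
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),    # 16x16x32
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),     # 32x32x1
            nn.Tanh(),
        )

    def forward(self, z):                            # z ~ U(-1, 1), shape (batch, 100)
        x = self.project(z).view(-1, 128, 4, 4)
        return self.net(x)

G = Generator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))  # lr and beta1 as listed
img = G(torch.rand(16, 100) * 2 - 1)                 # (16, 1, 32, 32)
```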
1611.01578 | 48 | Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, 2012.
Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Mostofa Patwary, Mostofa Ali, Ryan P. Adams, et al. Scalable bayesian optimization using deep neural networks. In ICML, 2015.
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
Kenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 2009.
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01603 | 48 | Error type Imprecise answer boundaries Ratio (%) 50 Example Context: âThe Free Movement of Workers Regulation articles 1 to 7 set out the main provisions on equal treatment of workers.â Question: âWhich articles of the Free Movement of Workers Regulation set out the primary provisions on equal treatment of workers?â Prediction: â1 to 7â, Answer: âarticles 1 to 7â Syntactic complications and ambiguities 28 Context: âA piece of paper was later found on which Luther had written his last statement. â Question: âWhat was later discovered written by Luther?â Prediction: âA piece of paperâ, Answer: âhis last statementâ Paraphrase problems 14 Context: âGenerally, education in Australia follows the three- tier model which includes primary education (primary schools), followed by secondary education (secondary schools/high schools) and tertiary education (universities and/or TAFE colleges).â Question: âWhat is the ï¬rst model of education, in the Aus- tralian system?â Prediction: âthree-tierâ, Answer: âprimary educationâ External knowledge 4 Context: âOn June 4, 2014, the NFL announced that the practice of | 1611.01603#48 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 48 | 11
# REFERENCES
Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural computation, 10(2):251–276, 1998.
Mohammad Gheshlaghi Azar, Vicenç Gómez, and Hilbert J Kappen. Dynamic policy programming. Journal of Machine Learning Research, 13(Nov):3207–3245, 2012.
J Andrew Bagnell and Jeff Schneider. Covariant policy search. In IJCAI, 2003.
Leemon C Baird III. Advantage updating. Technical Report WL-TR-93-1146, Wright-Patterson Air Force Base Ohio: Wright Laboratory, 1993.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.
Richard Bellman. Dynamic programming. Princeton University Press, 1957.
Dimitri P Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientific, 2005.
Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientiï¬c, 1996. | 1611.01626#48 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01578 | 49 | Phillip D. Summers. A methodology for LISP program construction from examples. Journal of the ACM, 1977.
Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015.
Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, 2015.
Daan Wierstra, Faustino J Gomez, and Jürgen Schmidhuber. Modeling systems with internal state using evolino. In GECCO, 2005.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Machine Learning, 1992.
# Under review as a conference paper at ICLR 2017 | 1611.01578#49 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01603 | 49 | Prediction: âthree-tierâ, Answer: âprimary educationâ External knowledge 4 Context: âOn June 4, 2014, the NFL announced that the practice of branding Super Bowl games with Roman numerals, a practice established at Super Bowl V, would be temporarily suspended, and that the game would be named using Arabic numerals as Super Bowl 50 as opposed to Super Bowl L.â Question: âIf Roman numerals were used in the naming of the 50th Super Bowl, which one would have been used?â Prediction: âSuper Bowl 50â, Answer: âLâ Multi- sentence 2 Context: âOver the next several years in addition to host to host interactive connections the network was enhanced to support terminal to host connections, host to host batch connections (remote job submission, remote printing, batch ï¬le transfer), interactive ï¬le transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP and additional public universities in Michigan join the network. All of this set the stage for Meritâs role in the NSFNET project starting | 1611.01603#49 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 49 | Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
Thomas Degris, Martha White, and Richard S Sutton. Off-policy actor-critic. 2012.
Roy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates. arXiv preprint arXiv:1207.4708, 2015.
Matthew Hausknecht and Peter Stone. On-policy vs. off-policy updates for deep reinforcement learning. Deep Reinforcement Learning: Frontiers and Challenges, IJCAI 2016 Workshop, 2016.
Nicolas Heess, David Silver, and Yee Whye Teh. Actor-critic reinforcement learning with energy-based policies. In JMLR: Workshop and Conference Proceedings 24, pp. 43–57, 2012.
Sham Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, volume 14, pp. 1531–1538, 2001.
Vijay R Konda and John N Tsitsiklis. On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143–1166, 2003.
Lucas Lehnert and Doina Precup. Policy gradient methods for off-policy control. arXiv preprint arXiv:1512.04105, 2015. | 1611.01626#49 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01578 | 50 | Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Machine Learning, 1992.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
# A APPENDIX | 1611.01578#50 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01603 | 50 | and eventually TCP/IP and additional public universities in Michigan join the network. All of this set the stage for Meritâs role in the NSFNET project starting in the mid-1980s.â Question: âWhat set the stage for Merits role in NSFNETâ Prediction: âAll of this set the stage for Merit âs role in the NSFNET project starting in the mid-1980sâ, Answer: âEthernet attached hosts, and eventually TCP/IP and additional public universities in Michigan join the networkâ | 1611.01603#50 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 50 | Lucas Lehnert and Doina Precup. Policy gradient methods for off-policy control. arXiv preprint arXiv:1512.04105, 2015.
Sergey Levine and Vladlen Koltun. Guided policy search. In Proceedings of the 30th International Conference on Machine Learning (ICML), pp. 1â9, 2013.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuo- motor policies. arXiv preprint arXiv:1504.00702, 2015.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wier- stra, and Martin Riedmiller. Playing atari with deep reinforcement learning. In NIPS Deep Learn- ing Workshop. 2013. | 1611.01626#50 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01578 | 51 | 14
# A APPENDIX
[Diagram for Figure 7: the discovered network's layer stack from the input image up to the softmax, with each convolutional block annotated by filter height (FH), filter width (FW), and filter count (N), e.g. FH: 7, FW: 5, N: 48.]
Figure 7: Convolutional architecture discovered by our method, when the search space does not have strides or pooling layers. FH is filter height, FW is filter width and N is number of filters. Note that the skip connections are not residual connections. If one layer has many input layers then all input layers are concatenated in the depth dimension.
# Under review as a conference paper at ICLR 2017 | 1611.01578#51 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01603 | 51 | Incorrect preprocessing 2 Context: "English chemist John Mayow (1641-1679) refined this work by showing that fire requires only a part of air that he called spiritus nitroaereus or just nitroaereus." Question: "John Mayow died in what year?" Prediction: "1641-1679", Answer: "1679"
Table 4: Error analysis on SQuAD. We randomly selected EM-incorrect answers and classified them into 6 different categories. Only the relevant sentence(s) from the context are shown for brevity.
# B VARIATIONS OF SIMILARITY AND FUSION FUNCTIONS
Eqn. 1: dot product (EM 65.5, F1 75.5); Eqn. 1: linear (EM 59.5, F1 69.7); Eqn. 1: bilinear (EM 61.6, F1 71.8); Eqn. 1: linear after MLP (EM 66.2, F1 76.4); Eqn. 2: MLP after concat (EM 67.1, F1 77.0); BIDAF (single) (EM 68.0, F1 77.3) | 1611.01603#51 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 51 | Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Pe- tersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 02 2015. URL http://dx.doi.org/10.1038/ nature14236.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. Reward augmented maximum likelihood for neural structured prediction. arXiv preprint arXiv:1609.00150, 2016. | 1611.01626#51 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01578 | 52 | 15
[Diagrams for Figure 8: recurrent cells drawn as computation graphs over add, elem_mult, identity, tanh, and sigmoid operations.]
Figure 8: A comparison of the original LSTM cell vs. two good cells our model found. Top left: LSTM cell. Top right: Cell found by our model when the search space does not include max and sin. Bottom: Cell found by our model when the search space includes max and sin (the controller did not choose to use the sin function).
16 | 1611.01578#52 | Neural Architecture Search with Reinforcement Learning | Neural networks are powerful and flexible models that work well for many
difficult learning tasks in image, speech and natural language understanding.
Despite their success, neural networks are still hard to design. In this paper,
we use a recurrent network to generate the model descriptions of neural
networks and train this RNN with reinforcement learning to maximize the
expected accuracy of the generated architectures on a validation set. On the
CIFAR-10 dataset, our method, starting from scratch, can design a novel network
architecture that rivals the best human-invented architecture in terms of test
set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is
0.09 percent better and 1.05x faster than the previous state-of-the-art model
that used a similar architectural scheme. On the Penn Treebank dataset, our
model can compose a novel recurrent cell that outperforms the widely-used LSTM
cell, and other state-of-the-art baselines. Our cell achieves a test set
perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than
the previous state-of-the-art model. The cell can also be transferred to the
character language modeling task on PTB and achieves a state-of-the-art
perplexity of 1.214. | http://arxiv.org/pdf/1611.01578 | Barret Zoph, Quoc V. Le | cs.LG, cs.AI, cs.NE | null | null | cs.LG | 20161105 | 20170215 | [
{
"id": "1611.01462"
},
{
"id": "1607.03474"
},
{
"id": "1603.05027"
},
{
"id": "1609.09106"
},
{
"id": "1511.06732"
},
{
"id": "1508.06615"
},
{
"id": "1606.04474"
},
{
"id": "1608.05859"
},
{
"id": "1609.08144"
},
{
"id": "1606.01885"
},
{
"id": "1505.00387"
},
{
"id": "1609.07843"
},
{
"id": "1512.05287"
},
{
"id": "1603.09382"
},
{
"id": "1608.06993"
},
{
"id": "1605.07648"
}
] |
1611.01603 | 52 | Table 5: Variations of similarity function α (Equation 1) and fusion function β (Equation 2) and their performance on the dev data of SQuAD. See Appendix B for the details of each variation.
In this appendix section, we experimentally demonstrate how different choices of the similarity function α (Equation 1) and the fusion function β (Equation 2) impact the performance of our model. Each variation is defined as follows:
Eqn. 1: dot product. Dot product α is defined as
α(h, u) = h^T u (6)
where T indicates matrix transpose. Dot product has been used for the measurement of similarity between two vectors by Hill et al. (2016).
Eqn. 1: linear. Linear α is defined as
α(h, u) = w_lin^T [h; u] (7)
where w_lin ∈ R^{4d} is a trainable weight matrix. This can be considered as the simplification of Equation 1 by dropping the term h ∘ u in the concatenation.
Eqn. 1: bilinear. Bilinear α is deï¬ned as | 1611.01603#52 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
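For concreteness, a tiny numpy sketch of the similarity variants defined in the entry above (Eq. 6 and Eq. 7), together with the paper's original trilinear Equation 1; random vectors and weights stand in for trained parameters:

```python
import numpy as np

d2 = 8                                    # "2d" in the paper's notation
h = np.random.randn(d2)                   # one context vector
u = np.random.randn(d2)                   # one query vector

def alpha_dot(h, u):
    """Eq. 6: plain dot product."""
    return h @ u

w_lin = np.random.randn(2 * d2)           # trainable in the real model
def alpha_linear(h, u):
    """Eq. 7: linear map of [h; u] (Eq. 1 without the h * u term)."""
    return w_lin @ np.concatenate([h, u])

w_full = np.random.randn(3 * d2)
def alpha_trilinear(h, u):
    """Eq. 1 in the paper: w^T [h; u; h * u]."""
    return w_full @ np.concatenate([h, u, h * u])
```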
1611.01626 | 52 | Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.
Edwin Pednault, Naoki Abe, and Bianca Zadrozny. Sequential cost-sensitive decision making with reinforcement learning. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 259â268. ACM, 2002.
Jing Peng and Ronald J Williams. Incremental multi-step Q-learning. Machine Learning, 22(1-3): 283â290, 1996.
Jan Peters, Katharina Mülling, and Yasemin Altun. Relative entropy policy search. In AAAI. Atlanta, 2010.
Martin Riedmiller. Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method. In Machine Learning: ECML 2005, pp. 317–328. Springer Berlin Heidelberg, 2005.
Gavin A Rummery and Mahesan Niranjan. On-line Q-learning using connectionist systems. 1994.
Brian Sallans and Geoffrey E Hinton. Reinforcement learning with factored states and actions. Journal of Machine Learning Research, 5(Aug):1063â1088, 2004. | 1611.01626#52 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01603 | 53 | Eqn. 1: bilinear. Bilinear α is defined as
α(h, u) = h^T W_bi u (8) where W_bi ∈ R^{2d×2d} is a trainable weight matrix. Bilinear term has been used by Chen et al. (2016).
Eqn. 1: linear after MLP. We can also perform linear mapping after a single perceptron layer:
α(h, u) = w_mlp^T tanh(W_mlp [h; u] + b_mlp) (9)
where Wmlp and bmlp are trainable weight matrix and bias, respectively. Linear mapping after perceptron layer has been used by Hermann et al. (2015).
# Eqn. 2: MLP after concatenation. We can deï¬ne β as | 1611.01603#53 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 53 | Brian Sallans and Geoffrey E Hinton. Reinforcement learning with factored states and actions. Journal of Machine Learning Research, 5(Aug):1063–1088, 2004.
Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1889–1897, 2015.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning (ICML), pp. 387–395, 2014.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
R. Sutton and A. Barto. Reinforcement Learning: an Introduction. MIT Press, 1998. | 1611.01626#53 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01603 | 54 | # Eqn. 2: MLP after concatenation. We can define β as
β(h, ũ, h̃) = max(0, W_mlp[h; ũ; h ∘ ũ; h ∘ h̃] + b_mlp), where W_mlp ∈ R^{2d×8d} and b_mlp ∈ R^{2d} are a trainable weight matrix and bias. This is equivalent to adding a ReLU after linearly transforming the original definition of β. Since the output dimension of β changes, the input dimension of the first LSTM of the modeling layer will change as well.
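A rough NumPy sketch of this β variant follows (for illustration only, not the paper's implementation); the shapes mirror the text, with h, ũ, h̃ ∈ R^{2d}, W_mlp ∈ R^{2d×8d}, and b_mlp ∈ R^{2d}, while the concrete sizes and initialization are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
two_d = 8                                            # 2d

h, u_tilde, h_tilde = (rng.standard_normal(two_d) for _ in range(3))

W_mlp = rng.standard_normal((two_d, 4 * two_d))      # maps the 8d concatenation back to 2d
b_mlp = np.zeros(two_d)

def beta_mlp(h, u_tilde, h_tilde):
    """beta(h, u~, h~) = max(0, W_mlp [h; u~; h*u~; h*h~] + b_mlp), i.e. ReLU after a linear map."""
    features = np.concatenate([h, u_tilde, h * u_tilde, h * h_tilde])  # 8d feature vector
    return np.maximum(0.0, W_mlp @ features + b_mlp)                   # 2d output

print(beta_mlp(h, u_tilde, h_tilde).shape)           # (2d,)
```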
The results of these variations on the dev data of SQuAD are shown in Table 5. It is important to note that there are non-trivial gaps between our definition of α and other definitions employed by previous work. Adding an MLP in β does not seem to help, yielding a slightly worse result than β without the MLP.
| 1611.01603#54 | Bidirectional Attention Flow for Machine Comprehension | Machine comprehension (MC), answering a query about a given context
paragraph, requires modeling complex interactions between the context and the
query. Recently, attention mechanisms have been successfully extended to MC.
Typically these methods use attention to focus on a small portion of the
context and summarize it with a fixed-size vector, couple attentions
temporally, and/or often form a uni-directional attention. In this paper we
introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage
hierarchical process that represents the context at different levels of
granularity and uses bi-directional attention flow mechanism to obtain a
query-aware context representation without early summarization. Our
experimental evaluations show that our model achieves the state-of-the-art
results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze
test. | http://arxiv.org/pdf/1611.01603 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi | cs.CL | Published as a conference paper at ICLR 2017 | null | cs.CL | 20161105 | 20180621 | [
{
"id": "1606.02245"
},
{
"id": "1608.07905"
},
{
"id": "1611.01604"
},
{
"id": "1609.05284"
},
{
"id": "1610.09996"
},
{
"id": "1606.01549"
},
{
"id": "1511.02274"
},
{
"id": "1505.00387"
},
{
"id": "1611.01436"
},
{
"id": "1611.01724"
},
{
"id": "1607.04423"
}
] |
1611.01626 | 54 | R. Sutton and A. Barto. Reinforcement Learning: an Introduction. MIT Press, 1998.
Richard S Sutton. Learning to predict by the methods of temporal differences. Machine learning, 3(1):9–44, 1988.
Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, volume 99, pp. 1057–1063, 1999.
Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995.
Philip Thomas. Bias in natural actor-critic algorithms. In Proceedings of The 31st International Conference on Machine Learning, pp. 441–448, 2014.
Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), pp. 2094–2100, 2016.
| 1611.01626#54 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01626 | 55 |
Harm Van Seijen, Hado Van Hasselt, Shimon Whiteson, and Marco Wiering. A theoretical and empirical analysis of expected sarsa. In 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pp. 177–184. IEEE, 2009.
Yin-Hao Wang, Tzuu-Hseng S Li, and Chih-Jui Lin. Backward q-learning: The combination of sarsa algorithm and q-learning. Engineering Applications of Artificial Intelligence, 26(9):2184–2193, 2013.
Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 1995–2003, 2016.
Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241–268, 1991. | 1611.01626#55 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01626 | 56 | Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 3(3):241–268, 1991.
# A PGQL BELLMAN RESIDUAL
Here we demonstrate that in the tabular case the Bellman residual of the induced Q-values for the PGQL updates (14) converges to zero as the temperature α decreases, which is the same guarantee as for vanilla regularized policy gradient. We will use the notation that π_α is the policy at the fixed point of the PGQL updates (14) for some α, i.e., π_α ∝ exp(Q̃^{π_α}/α), with induced Q-value function Q^{π_α}. First, note that we can apply the same argument as before to show that lim_{α→0} ||T*Q̃^{π_α} − T^{π_α}Q̃^{π_α}|| = 0 (the only difference is that we lack the property that Q̃^{π_α} is the fixed point of T^{π_α}). Secondly, from the update equation we can write Q̃^{π_α} − Q^{π_α} = η (T*Q̃^{π_α} − Q^{π_α}). Combining these two facts we have
||Q̃^{π_α} − Q^{π_α}|| = η ||T*Q̃^{π_α} − Q^{π_α}|| | 1611.01626#56 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01626 | 57 | ||Q̃^{π_α} − Q^{π_α}|| = η ||T*Q̃^{π_α} − Q^{π_α}||
= η ||T*Q̃^{π_α} − T^{π_α}Q̃^{π_α} + T^{π_α}Q̃^{π_α} − Q^{π_α}||
≤ η (||T*Q̃^{π_α} − T^{π_α}Q̃^{π_α}|| + ||T^{π_α}Q̃^{π_α} − T^{π_α}Q^{π_α}||)
≤ η (||T*Q̃^{π_α} − T^{π_α}Q̃^{π_α}|| + γ ||Q̃^{π_α} − Q^{π_α}||),
and hence (using η ≤ 1) ||Q̃^{π_α} − Q^{π_α}|| ≤ η/(1 − γ) ||T*Q̃^{π_α} − T^{π_α}Q̃^{π_α}||,
and so ||Q̃^{π_α} − Q^{π_α}|| → 0 as α → 0. Using this fact we have
||T*Q̃^{π_α} − Q̃^{π_α}|| = ||T*Q̃^{π_α} − T^{π_α}Q̃^{π_α} + T^{π_α}Q̃^{π_α} − Q^{π_α} + Q^{π_α} − Q̃^{π_α}||
≤ ||T*Q̃^{π_α} − T^{π_α}Q̃^{π_α}|| + ||T^{π_α}Q̃^{π_α} − T^{π_α}Q^{π_α}|| + ||Q^{π_α} − Q̃^{π_α}||
≤ ||T*Q̃^{π_α} − T^{π_α}Q̃^{π_α}|| + (1 + γ) ||Q̃^{π_α} − Q^{π_α}||
≤ 3/(1 − γ) ||T*Q̃^{π_α} − T^{π_α}Q̃^{π_α}||,
which therefore also converges to zero in the limit. Finally we obtain | 1611.01626#57 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01626 | 58 | which therefore also converges to zero in the limit. Finally we obtain
||T*Q^{π_α} − Q^{π_α}|| = ||T*Q^{π_α} − T*Q̃^{π_α} + T*Q̃^{π_α} − Q̃^{π_α} + Q̃^{π_α} − Q^{π_α}||
≤ ||T*Q^{π_α} − T*Q̃^{π_α}|| + ||T*Q̃^{π_α} − Q̃^{π_α}|| + ||Q̃^{π_α} − Q^{π_α}||
≤ (1 + γ) ||Q̃^{π_α} − Q^{π_α}|| + ||T*Q̃^{π_α} − Q̃^{π_α}||,
which combined with the two previous results implies that lim_{α→0} ||T*Q^{π_α} − Q^{π_α}|| = 0, as before.
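As a small numerical illustration of the α → 0 limit used above (this is not from the paper), the snippet below computes the Boltzmann policy π_α ∝ exp(Q̃/α) for a single state with made-up action preferences and shows it collapsing onto the greedy action as the temperature shrinks, which is the regime in which the Bellman residual bound above vanishes.

```python
import numpy as np

q_tilde = np.array([1.0, 1.5, 0.2])       # made-up action preferences Q~ for one state

def boltzmann(q, alpha):
    """pi_alpha(a) proportional to exp(q(a) / alpha), computed with a stable softmax."""
    z = q / alpha
    z = z - z.max()                        # subtract the max to avoid overflow in exp
    p = np.exp(z)
    return p / p.sum()

for alpha in (1.0, 0.1, 0.01):
    print(alpha, np.round(boltzmann(q_tilde, alpha), 4))
# As alpha -> 0 the policy concentrates on argmax_a q_tilde(a) (the greedy action).
```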
# B ATARI SCORES | 1611.01626#58 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01626 | 59 | Game alien amidar assault asterix asteroids atlantis bank heist battle zone beam rider berzerk bowling boxing breakout centipede chopper command crazy climber defender demon attack double dunk enduro ï¬shing derby freeway frostbite gopher gravitar hero ice hockey jamesbond kangaroo krull kung fu master montezuma revenge ms pacman name this game phoenix pitfall pong private eye qbert riverraid road runner robotank seaquest skiing solaris space invaders star gunner surround tennis time pilot tutankham up n down venture video pinball wizard of wor yars revenge zaxxon A3C 38.43 68.69 854.64 191.69 24.37 15496.01 210.28 21.63 59.55 79.38 2.70 510.30 2341.13 50.22 61.13 510.25 475.93 4027.57 1250.00 9.94 140.84 -0.26 5.85 429.76 0.71 145.71 62.25 133.90 -0.94 736.30 182.34 -0.49 17.91 102.01 447.05 5.48 116.37 -0.88 186.91 107.25 603.11 15.71 3.81 54.27 | 1611.01626#59 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01626 | 60 | 17.91 102.01 447.05 5.48 116.37 -0.88 186.91 107.25 603.11 15.71 3.81 54.27 27.05 188.65 756.60 28.29 145.58 270.74 224.76 1637.01 -1.76 3007.37 150.52 81.54 4.01 Q-learning 25.53 12.29 1695.21 98.53 5.32 13635.88 91.80 2.89 79.94 55.55 -7.09 299.49 3291.22 105.98 19.18 189.01 58.94 3449.27 91.35 9.94 -14.48 -0.13 10.71 9131.97 1.35 15.47 21.57 110.97 -0.94 3586.30 260.14 1.80 10.71 113.89 812.99 5.49 24.96 0.03 159.71 65.01 179.69 134.87 3.71 54.10 34.61 146.39 205.70 -1.51 -15.35 91.59 110.11 148.10 -1.76 4325.02 88.07 23.39 44.11 PGQL 46.70 71.00 2802.87 3790.08 50.23 16217.49 212.15 52.00 | 1611.01626#60 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01626 | 61 | 88.07 23.39 44.11 PGQL 46.70 71.00 2802.87 3790.08 50.23 16217.49 212.15 52.00 155.71 92.85 3.85 902.77 2959.16 73.88 162.93 476.11 911.13 3994.49 1375.00 9.94 145.57 -0.13 5.71 2060.41 1.74 92.88 76.96 142.08 -0.75 557.44 254.42 -0.48 25.76 188.90 1507.07 5.49 116.37 -0.04 136.17 128.63 519.51 71.50 5.88 54.16 28.66 608.44 977.99 78.15 145.58 438.50 239.58 1484.43 -1.76 4743.68 325.39 252.83 224.89 | 1611.01626#61 | Combining policy gradient and Q-learning | Policy gradient is an efficient technique for improving a policy in a
reinforcement learning setting. However, vanilla online variants are on-policy
only and not able to take advantage of off-policy data. In this paper we
describe a new technique that combines policy gradient with off-policy
Q-learning, drawing experience from a replay buffer. This is motivated by
making a connection between the fixed points of the regularized policy gradient
algorithm and the Q-values. This connection allows us to estimate the Q-values
from the action preferences of the policy, to which we apply Q-learning
updates. We refer to the new technique as 'PGQL', for policy gradient and
Q-learning. We also establish an equivalency between action-value fitting
techniques and actor-critic algorithms, showing that regularized policy
gradient techniques can be interpreted as advantage function learning
algorithms. We conclude with some numerical examples that demonstrate improved
data efficiency and stability of PGQL. In particular, we tested PGQL on the
full suite of Atari games and achieved performance exceeding that of both
asynchronous advantage actor-critic (A3C) and Q-learning. | http://arxiv.org/pdf/1611.01626 | Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih | cs.LG, cs.AI, math.OC, stat.ML | null | null | cs.LG | 20161105 | 20170407 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1609.00150"
},
{
"id": "1512.04105"
},
{
"id": "1511.05952"
},
{
"id": "1504.00702"
}
] |
1611.01436 | 0 | arXiv:1611.01436v2 [cs.CL] 17 Mar 2017
# LEARNING RECURRENT SPAN REPRESENTATIONS FOR EXTRACTIVE QUESTION ANSWERING
Kenton Lee†, Shimi Salant∗, Tom Kwiatkowski‡, Ankur Parikh‡, Dipanjan Das‡, and Jonathan Berant∗
[email protected], [email protected], {tomkwiat, aparikh, dipanjand}@google.com, [email protected]
†University of Washington, Seattle, USA ∗Tel-Aviv University, Tel-Aviv, Israel ‡Google Research, New York, USA
# ABSTRACT | 1611.01436#0 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 1 | Yoav Goldberg Computer Science Department Bar Ilan University [email protected]
# Abstract
The success of long short-term memory (LSTM) neural networks in language processing is typically attributed to their ability to capture long-distance statistical regularities. Linguistic regularities are often sensitive to syntactic structure; can such dependencies be captured by LSTMs, which do not have explicit structural representations? We begin addressing this question using number agreement in English subject-verb dependencies. We probe the architecture's grammatical competence both using training objectives with an explicit grammatical target (number prediction, grammaticality judgments) and using language models. In the strongly supervised settings, the LSTM achieved very high overall accuracy (less than 1% errors), but errors increased when sequential and structural information conflicted. The frequency of such errors rose sharply in the language-modeling setting. We conclude that LSTMs can capture a non-trivial amount of grammatical structure given targeted supervision, but stronger architectures may be required to further reduce errors; furthermore, the language modeling signal is insufficient for capturing syntax-sensitive dependencies, and should be supplemented with more direct supervision if such dependencies need to be captured. | 1611.01368#1 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 1 | †University of Washington, Seattle, USA ∗Tel-Aviv University, Tel-Aviv, Israel ‡Google Research, New York, USA
# ABSTRACT
The reading comprehension task, that asks questions about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates pre-defined manually or through the use of an external NLP pipeline. However, Rajpurkar et al. (2016) recently released the SQUAD dataset in which the answers can be arbitrary strings from the supplied text. In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed length representations of all spans in the evidence document with a recurrent network. We show that scoring explicit span representations significantly improves performance over other approaches that factor the prediction into separate predictions about words or start and end markers. Our approach improves upon the best published results of Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s baseline by > 50%.
# INTRODUCTION | 1611.01436#1 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 2 | (Hochreiter and Schmidhuber, 1997) or gated recurrent units (GRU) (Cho et al., 2014), has led to significant gains in language modeling (Mikolov et al., 2010; Sundermeyer et al., 2012), parsing (Vinyals et al., 2015; Kiperwasser and Goldberg, 2016; Dyer et al., 2016), machine translation (Bahdanau et al., 2015) and other tasks.
The effectiveness of RNNs1 is attributed to their ability to capture statistical contingencies that may span an arbitrary number of words. The word France, for example, is more likely to occur somewhere in a sentence that begins with Paris than in a sentence that begins with Penguins. The fact that an arbitrary number of words can intervene between the mutually predictive words implies that they cannot be captured by models with a fixed window such as n-gram models, but can in principle be captured by RNNs, which do not have an architecturally fixed limit on dependency length. | 1611.01368#2 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 2 | # INTRODUCTION
A primary goal of natural language processing is to develop systems that can answer questions about the contents of documents. The reading comprehension task is of practical interest – we want computers to be able to read the world's text and then answer our questions – and, since we believe it requires deep language understanding, it has also become a flagship task in NLP research.
A number of reading comprehension datasets have been developed that focus on answer selection from a small set of alternatives defined by annotators (Richardson et al., 2013) or existing NLP pipelines that cannot be trained end-to-end (Hill et al., 2016; Hermann et al., 2015). Subsequently, the models proposed for this task have tended to make use of the limited set of candidates, basing their predictions on mention-level attention weights (Hermann et al., 2015), or centering classifiers (Chen et al., 2016), or network memories (Hill et al., 2016) on candidate locations. | 1611.01436#2 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 3 | RNNs are sequence models: they do not explicitly incorporate syntactic structure. Indeed, many word co-occurrence statistics can be captured by treating the sentence as an unstructured list of words (Paris-France); it is therefore unsurprising that RNNs can learn them well. Other dependencies, however, are sensitive to the syntactic structure of the sentence (Chomsky, 1965; Everaert et al., 2015). To what extent can RNNs learn to model such phenomena based only on sequential cues?
# Introduction
Recurrent neural networks (RNNs) are highly effective models of sequential data (Elman, 1990). The rapid adoption of RNNs in NLP systems in recent years, in particular of RNNs with gating mechanisms such as long short-term memory (LSTM) units
Previous research has shown that RNNs (in particular LSTMs) can learn artificial context-free languages (Gers and Schmidhuber, 2001) as well as nesting and
1In this work we use the term RNN to refer to the entire class of sequential recurrent neural networks. Instances of the class include long short-term memory networks (LSTM) and the Simple Recurrent Network (SRN) due to Elman (1990). | 1611.01368#3 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 3 | Recently, Rajpurkar et al. (2016) released the less restricted SQUAD dataset1 that does not place any constraints on the set of allowed answers, other than that they should be drawn from the evidence document. Rajpurkar et al. proposed a baseline system that chooses answers from the constituents identified by an existing syntactic parser. This allows them to prune the O(N^2) answer candidates in each document of length N, but it also effectively renders 20.7% of all questions unanswerable.
Subsequent work by Wang & Jiang (2016) significantly improves upon this baseline by using an end-to-end neural network architecture to identify answer spans by labeling either individual words, or the start and end of the answer span. Both of these methods do not make independence assumptions about substructures, but they are susceptible to search errors due to greedy training and decoding.
1http://stanford-qa.com
1 | 1611.01436#3 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 4 | indentation in a programming language (Karpathy et al., 2016). The goal of the present work is to probe their ability to learn natural language hierarchical (syntactic) structures from a corpus without syntactic annotations. As a first step, we focus on a particular dependency that is commonly regarded as evidence for hierarchical structure in human language: English subject-verb agreement, the phenomenon in which the form of a verb depends on whether the subject is singular or plural (the kids play but the kid plays; see additional details in Section 2). If an RNN-based model succeeded in learning this dependency, that would indicate that it can learn to approximate or even faithfully implement syntactic structure. | 1611.01368#4 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 4 | 1http://stanford-qa.com
1
In contrast, here we argue that it is beneficial to simplify the decoding procedure by enumerating all possible answer spans. By explicitly representing each answer span, our model can be globally normalized during training and decoded exactly during evaluation. A naive approach to building the O(N^2) spans of up to length N would require a network that is cubic in size with respect to the passage length, and such a network would be untrainable. To overcome this, we present a novel neural architecture called RASOR that builds fixed-length span representations, reusing recurrent computations for shared substructures. We demonstrate that directly classifying each of the competing spans, and training with global normalization over all possible spans, leads to a significant increase in performance. In our experiments, we show an increase in performance over Wang & Jiang (2016) of 5% in terms of exact match to a reference answer, and 3.6% in terms of predicted answer F1 with respect to the reference. On both of these metrics, we close the gap between Rajpurkar et al.'s baseline and the human-performance upper-bound by > 50%.
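To make the idea of enumerating and globally normalizing all candidate spans concrete, here is a toy sketch; it is not the RASOR architecture itself, and the endpoint-concatenation scorer, the array names, and the sizes are all assumptions for illustration. Each candidate span is scored from the word vectors at its endpoints and a single softmax is taken over every candidate, so training and exact decoding both operate over the full span set.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, dim, max_len = 12, 16, 5                # passage length, vector size, max span length

word_vecs = rng.standard_normal((n_words, dim))  # stand-in for recurrent passage encodings
w_score = rng.standard_normal(2 * dim)           # toy scorer over [start_vec; end_vec]

# Enumerate all candidate spans of length <= max_len (a subset of the O(N^2) total).
spans = [(s, e) for s in range(n_words) for e in range(s, min(s + max_len, n_words))]
scores = np.array([w_score @ np.concatenate([word_vecs[s], word_vecs[e]]) for s, e in spans])

# Global normalization: one softmax over every candidate span.
scores = scores - scores.max()
probs = np.exp(scores) / np.exp(scores).sum()

best_span = spans[int(np.argmax(probs))]
print(len(spans), best_span, float(probs.max()))
```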
2 EXTRACTIVE QUESTION ANSWERING
2.1 TASK DEFINITION | 1611.01436#4 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 5 | Our main interest is in whether LSTMs have the capacity to learn structural dependencies from a natural corpus. We therefore begin by addressing this question under the most favorable conditions: training with explicit supervision. In the setting with the strongest supervision, which we refer to as the number prediction task, we train it directly on the task of guessing the number of a verb based on the words that preceded it (Sections 3 and 4). We further experiment with a grammaticality judgment training objective, in which we provide the model with full sentences annotated as to whether or not they violate subject-verb number agreement, without an indication of the locus of the violation (Section 5). Finally, we trained the model without any grammatical supervision, using a language modeling objective (predicting the next word). | 1611.01368#5 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 5 | 2 EXTRACTIVE QUESTION ANSWERING
2.1 TASK DEFINITION
Extractive question answering systems take as input a question q = {q_0, ..., q_n} and a passage of text p = {p_0, ..., p_m} from which they predict a single answer span a = (a_start, a_end), represented as a pair of indices into p. Machine learned extractive question answering systems, such as the one presented here, learn a predictor function f(q, p) → a from a training dataset of (q, p, a) triples.
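For example, with tokenized inputs the answer is just a pair of passage indices; the tiny snippet below (illustrative only, with a made-up passage and question) shows how a prediction a = (a_start, a_end) maps back to an answer string.

```python
# A made-up (q, p, a) example; the question is shown only to complete the triple.
passage = "Super Bowl 50 was played at Levi 's Stadium in Santa Clara".split()
question = "Where was Super Bowl 50 played ?".split()

a_start, a_end = 6, 11                       # inclusive token indices into the passage
answer = " ".join(passage[a_start:a_end + 1])
print(answer)                                # Levi 's Stadium in Santa Clara
```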
2.2 RELATED WORK | 1611.01436#5 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 6 | Our quantitative results (Section 4) and qualitative analysis (Section 7) indicate that most naturally occurring agreement cases in the Wikipedia corpus are easy: they can be resolved without syntactic information, based only on the sequence of nouns preceding the verb. This leads to high overall accuracy in all models. Most of our experiments focus on the supervised number prediction model. The accuracy of this model was lower on harder cases, which require the model to encode or approximate structural information; nevertheless, it succeeded in recovering the majority of agreement cases even when four nouns of the opposite number intervened between the subject and the verb (17% errors). Baseline models failed spectacularly on these hard cases, performing far below chance levels. Fine-grained analysis revealed that mistakes are much more common when no overt cues
to syntactic structure (in particular function words) are available, as is the case in noun-noun compounds and reduced relative clauses. This indicates that the number prediction model indeed managed to capture a decent amount of syntactic knowledge, but was overly reliant on function words. | 1611.01368#6 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 6 | 2.2 RELATED WORK
For the SQUAD dataset, the original paper from Rajpurkar et al. (2016) implemented a linear model with sparse features based on n-grams and part-of-speech tags present in the question and the candidate answer. Other than lexical features, they also used syntactic information in the form of dependency paths to extract more general features. They set a strong baseline for following work and also presented an in-depth analysis, showing that lexical and syntactic features contribute most strongly to their model's performance. Subsequent work by Wang & Jiang (2016) uses an end-to-end neural network method that uses a Match-LSTM to model the question and the passage, and uses pointer networks (Vinyals et al., 2015) to extract the answer span from the passage. This model resorts to greedy decoding and falls short in terms of performance compared to our model (see Section 5 for more detail). While we only compare to published baselines, there are other unpublished competitive systems on the SQUAD leaderboard, as listed in footnote 4. | 1611.01436#6 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 7 | Error rates increased only mildly when we switched to more indirect supervision consisting only of sentence-level grammaticality annotations without an indication of the crucial verb. By contrast, the language model trained without explicit grammatical supervision performed worse than chance on the harder agreement prediction cases. Even a state-of-the-art large-scale language model (Jozefowicz et al., 2016) was highly sensitive to recent but structurally irrelevant nouns, making more than five times as many mistakes as the number prediction model on these harder cases. These results suggest that explicit supervision is necessary for learning the agreement dependency using this architecture, limiting its plausibility as a model of child language acquisition (Elman, 1990). From a more applied perspective, this result suggests that for tasks in which it is desirable to capture syntactic dependencies (e.g., machine translation or language generation), language modeling objectives should be supplemented by supervision signals that directly capture the desired behavior.
# 2 Background: Subject-Verb Agreement as Evidence for Syntactic Structure
The form of an English third-person present tense verb depends on whether the head of the syntactic subject is plural or singular:2
(1) a. The key is on the table.
b. *The key are on the table. c. *The keys is on the table. d. | 1611.01368#7 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 7 | A task that is closely related to extractive question answering is the Cloze task (Taylor, 1953), in which the goal is to predict a concealed span from a declarative sentence given a passage of supporting text. Recently, Hermann et al. (2015) presented a Cloze dataset in which the task is to predict the correct entity in an incomplete sentence given an abstractive summary of a news article. Hermann et al. also present various neural architectures to solve the problem. Although this dataset is large and varied in domain, recent analysis by Chen et al. (2016) shows that simple models can achieve close to the human upper bound. As noted by the authors of the SQUAD paper, the annotated answers in the SQUAD dataset are often spans that include non-entities and can be longer phrases, unlike the Cloze datasets, thus making the task more challenging. | 1611.01436#7 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 8 | (1) a. The key is on the table. b. *The key are on the table. c. *The keys is on the table. d. The keys are on the table.
While in these examples the subject's head is adjacent to the verb, in general the two can be separated by some sentential material:3
2 Identifying the head of the subject is typically straightforward. In what follows we will use the shorthand "the subject" to refer to the head of the subject.
3In the examples, the subject and the corresponding verb are marked in boldface, agreement attractors are underlined and intervening nouns of the same number as the subject are marked in italics. Asterisks mark unacceptable sentences.
# (2) The keys to the cabinet are on the table.
Given a syntactic parse of the sentence and a verb, it is straightforward to identify the head of the subject that corresponds to that verb, and use that information to determine the number of the verb (Figure 1).
[Figure 1: dependency parse of "The keys to the cabinet are on the table", with arcs root, nsubj, det, prep, pobj.] | 1611.01368#8 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 8 | Another, more traditional line of work has focused on extractive question answering on sentences, where the task is to extract a sentence from a document, given a question. Relevant datasets include datasets from the annual TREC evaluations (Voorhees & Tice, 2000) and WikiQA (Yang et al., 2015), where the latter dataset specifically focused on Wikipedia passages. There has been a line of interesting recent publications using neural architectures, focused on this variety of extractive question answering (Tymoshenko et al., 2016; Wang et al., 2016, inter alia). These methods model the question and a candidate answer sentence, but do not focus on possible candidate answer spans that may contain the answer to the given question. In this work, we focus on the more challenging problem of extracting the precise answer span.
# 3 MODEL
We propose a model architecture called RASOR2 illustrated in Figure 1, that explicitly computes embedding representations for candidate answer spans. In most structured prediction problems (e.g. sequence labeling or parsing), the number of possible output structures is exponential in the input length, and computing representations for every candidate is prohibitively expensive. However, we exploit the simplicity of our task, where we can trivially and tractably enumerate all candidates. This facilitates an expressive model that computes joint representations of every answer span, that can be globally normalized during learning. | 1611.01436#8 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 9 | [Figure 1: dependency parse of "The keys to the cabinet are on the table", with arcs root, nsubj, det, prep, pobj.]
Figure 1: The form of the verb is determined by the head of the subject, which is directly connected to it via an nsubj edge. Other nouns that intervene between the head of the subject and the verb (here cabinet is such a noun) are irrelevant for determining the form of the verb and need to be ignored.
By contrast, models that are insensitive to structure may run into substantial difficulties capturing this dependency. One potential issue is that there is no limit to the complexity of the subject NP, and any number of sentence-level modifiers and parentheticals, and therefore an arbitrary number of words, can appear between the subject and the verb:
(3) The building on the far right that's quite old and run down is the Kilgore Bank Building.
This property of the dependency entails that it cannot be captured by an n-gram model with a fixed n. RNNs are in principle able to capture dependencies of an unbounded length; however, it is an empirical question whether or not they will learn to do so in practice when trained on a natural corpus. | 1611.01368#9 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 9 | In order to compute these span representations, we must aggregate information from the passage and the question for every answer candidate. For the example in Figure 1, RASOR computes an embedding for the candidate answer spans: fixed to, fixed to the, to the, etc. A naive approach for these aggregations would require a network that is cubic in size with respect to the passage length. Instead, our model reduces this to a quadratic size by reusing recurrent computations for shared substructures (i.e. common passage words) from different spans.
Since the choice of answer span depends on the original question, we must incorporate this information into the computation of the span representation. We model this by augmenting the passage word embeddings with additional embedding representations of the question.
In this section, we motivate and describe the architecture for RASOR in a top-down manner.
3.1 SCORING ANSWER SPANS
The goal of our extractive question answering system is to predict the single best answer span among all candidates from the passage p, denoted as A(p). Therefore, we define a probability distribution over all possible answer spans given the question q and passage p, and the predictor function finds the answer span with the maximum likelihood: | 1611.01436#9 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 10 | A more fundamental challenge that the dependency poses for structure-insensitive models is the possibility of agreement attraction errors (Bock and Miller, 1991). The correct form in (3) could be selected using simple heuristics such as "agree with the most recent noun", which are readily available to sequence models. In general, however, such heuristics are unreliable, since other nouns can intervene between the subject and the verb in the linear sequence of the sentence. Those intervening nouns can have the same number as the subject, as in (4), or the opposite number as in (5)-(7):
(4) Alluvial soils carried in the floodwaters add nutrients to the floodplains.
(5) The only championship banners that are currently displayed within the building are for national or NCAA Championships.
(6) The length of the forewings is 12-13.
(7) Yet the ratio of men who survive to the women and children who survive is not clear in this story.
Intervening nouns with the opposite number from the subject are called agreement attractors. The potential presence of agreement attractors entails that the model must identify the head of the syntactic subject that corresponds to a given verb in order to choose the correct inflected form of that verb. | 1611.01368#10 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 10 | f(q, p) := argmax_{a ∈ A(p)} P(a | q, p)   (1)
One might be tempted to introduce independence assumptions that would enable cheaper decoding. For example, this distribution can be modeled as (1) a product of conditionally independent distributions (binary) for every word or (2) a product of conditionally independent distributions (over words) for the start and end indices of the answer span. However, we show in Section 5.2 that such independence assumptions hurt the accuracy of the model, and instead we only assume a fixed-length representation h_a of each candidate span that is scored and normalized with a softmax layer (Span score and Softmax in Figure 1):
s_a = w_a · FFNN(h_a)   (2)
P(a | q, p) = exp(s_a) / Σ_{a' ∈ A(p)} exp(s_a'),   a ∈ A(p)   (3)
where FFNN(·) denotes a fully connected feed-forward neural network that provides a non-linear mapping of its input embedding.
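As a concrete illustration of Equations (1)-(3), the following is a minimal NumPy sketch (not the authors' implementation) of enumerating candidate spans and normalizing their scores with a single softmax. The names candidate_spans, score_spans and MAX_SPAN_LEN, and the toy random embeddings, are illustrative assumptions; the 30-word span limit is taken from the experimental setup described later in the paper.

```python
# Minimal sketch: enumerate candidate spans and normalize their scores
# with one softmax over all candidates, as in Equations (1)-(3).
import numpy as np

MAX_SPAN_LEN = 30  # the paper limits candidates to spans of at most 30 words

def candidate_spans(m, max_len=MAX_SPAN_LEN):
    """All (start, end) index pairs for a passage of length m."""
    return [(i, j) for i in range(m) for j in range(i, min(i + max_len, m))]

def score_spans(span_embeddings, w, ffnn):
    """span_embeddings: dict (start, end) -> fixed-length vector h_a.
    ffnn: any non-linear map standing in for FFNN; w: final scoring weights."""
    spans = list(span_embeddings)
    scores = np.array([w @ ffnn(span_embeddings[a]) for a in spans])  # Eq. (2)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                       # softmax over all spans, Eq. (3)
    best = spans[int(np.argmax(probs))]        # argmax decoding, Eq. (1)
    return best, dict(zip(spans, probs))

# toy usage with random 8-d embeddings for a 5-word passage
rng = np.random.default_rng(0)
spans = candidate_spans(5)
embs = {a: rng.normal(size=8) for a in spans}
w = rng.normal(size=8)
best, probs = score_spans(embs, w, ffnn=np.tanh)
print(best, round(float(probs[best]), 3))
```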
3.2 RASOR: RECURRENT SPAN REPRESENTATION | 1611.01436#10 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 11 | Given the difficulty in identifying the subject from the linear sequence of the sentence, dependencies such as subject-verb agreement serve as an argument for structured syntactic representations in humans (Everaert et al., 2015); they may challenge models such as RNNs that do not have pre-wired syntactic representations. We note that subject-verb number agreement is only one of a number of structure-sensitive dependencies; other examples include negative polarity items (e.g., any) and reflexive pronouns (herself). Nonetheless, a model's success in learning subject-verb agreement would be highly suggestive of its ability to master hierarchical structure.
# 3 The Number Prediction Task
To what extent can a sequence model learn to be sensitive to the hierarchical structure of natural language? To study this question, we propose the number prediction task. In this task, the model sees the sentence up to but not including a present-tense verb, e.g.:
(8) The keys to the cabinet
It then needs to guess the number of the following verb (a binary choice, either PLURAL or SINGULAR). We examine variations on this task in Section 5. | 1611.01368#11 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 11 | 3.2 RASOR: RECURRENT SPAN REPRESENTATION
The previously defined probability distribution depends on the answer span representations, h_a. When computing h_a, we assume access to representations of individual passage words that have been augmented with a representation of the question. We denote these question-focused passage word embeddings as {p*_1, ..., p*_m} and describe their creation in Section 3.3. In order to reuse computation for shared substructures, we use a bidirectional LSTM (Hochreiter & Schmidhuber, 1997) to encode the left and right context of every p*_i. This allows us to simply concatenate the bidirectional LSTM (BiLSTM) outputs at the endpoints of a span to jointly encode its inside and outside information (Span embedding in Figure 1):
{p'_1, ..., p'_m} = BILSTM({p*_1, ..., p*_m})   (4)
h_(a_start, a_end) = [p'_(a_start), p'_(a_end)],   (a_start, a_end) ∈ A(p)   (5)
2An abbreviation for Recurrent Span Representations, pronounced as razor.
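A minimal sketch (assuming precomputed per-token contextual vectors, not the authors' code) of the span-embedding construction in Equations (4)-(5): one recurrent pass over the passage is reused to build every candidate span embedding by concatenating the vectors at the span endpoints. The names span_embeddings and max_len are illustrative.

```python
# Sketch only: reuse one pass of per-token context vectors (stand-ins for the
# BiLSTM outputs p'_i) to build every span embedding by endpoint concatenation.
import numpy as np

def span_embeddings(token_outputs, max_len=30):
    """token_outputs: array of shape (m, d), one contextual vector per word.
    Returns {(start, end): concat(p'_start, p'_end)} for all candidate spans."""
    m, _ = token_outputs.shape
    table = {}
    for start in range(m):
        for end in range(start, min(start + max_len, m)):
            table[(start, end)] = np.concatenate(
                [token_outputs[start], token_outputs[end]]
            )
    return table

# toy usage: 6 words, 4-dimensional contextual vectors
outputs = np.random.default_rng(1).normal(size=(6, 4))
spans = span_embeddings(outputs)
print(len(spans), spans[(2, 4)].shape)  # quadratic number of spans, each 8-d
```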
| 1611.01436#11 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 12 | It then needs to guess the number of the following verb (a binary choice, either PLURAL or SINGULAR). We examine variations on this task in Section 5.
In order to perform well on this task, the model needs to encode the concepts of syntactic number and syntactic subjecthood: it needs to learn that some words are singular and others are plural, and to be able to identify the correct subject. As we have illustrated in Section 2, correctly identifying the subject that corresponds to a particular verb often requires sensitivity to hierarchical syntax.
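A small illustrative sketch (hypothetical helper, not the authors' pipeline) of how a single number-prediction example can be built from a sentence with a known present-tense verb: the input is the sentence truncated just before the verb, and the label is the verb's number.

```python
# Turn a sentence with a known present-tense verb position into a
# number-prediction example: prefix up to (not including) the verb, plus label.
def make_example(tokens, verb_index, verb_is_plural):
    prefix = tokens[:verb_index]  # everything before the verb
    label = "PLURAL" if verb_is_plural else "SINGULAR"
    return prefix, label

tokens = "The keys to the cabinet are on the table".split()
prefix, label = make_example(tokens, verb_index=5, verb_is_plural=True)
print(prefix, label)  # ['The', 'keys', 'to', 'the', 'cabinet'] PLURAL
```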
Data: An appealing property of the number predic- tion task is that we can generate practically unlimited training and testing examples for this task by query- ing a corpus for sentences with present-tense verbs, and noting the number of the verb. Importantly, we do not need to correctly identify the subject in order to create a training or test example. We generated a corpus of â¼1.35 million number prediction problems based on Wikipedia, of which â¼121,500 (9%) were used for training, â¼13,500 (1%) for validation, and the remaining â¼1.21 million (90%) were reserved for testing.4 The large number of test sentences was necessary to ensure that we had a good variety of test sentences representing less common constructions (see Section 4).5 | 1611.01368#12 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 12 | 2An abbreviation for Recurrent Span Representations, pronounced as razor.
where BILSTM(·) denotes a BiLSTM over its input embedding sequence and p'_i is the concatenation of forward and backward outputs at time-step i. While the visualization in Figure 1 shows a single-layer BiLSTM for simplicity, we use a multi-layer BiLSTM in our experiments. The concatenated output of each layer is used as input for the subsequent layer, allowing the upper layers to depend on the entire passage.
3.3 QUESTION-FOCUSED PASSAGE WORD EMBEDDING
Computing the question-focused passage word embeddings {p*_1, ..., p*_m} requires integrating question information into the passage. The architecture for this integration is flexible and likely depends on the nature of the dataset. For the SQUAD dataset, we find that both passage-aligned and passage-independent question representations are effective at incorporating this contextual information, and experiments will show that their benefits are complementary. To incorporate these question representations, we simply concatenate them with the passage word embeddings (Question-focused passage word embedding in Figure 1). | 1611.01436#12 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 13 | Model and baselines: We encode words as one-hot vectors: the model does not have access to the characters that make up the word. Those vectors are then embedded into a 50-dimensional vector space. An LSTM with 50 hidden units reads those embedding vectors in sequence; the state of the LSTM at the end of the sequence is then fed into a logistic regression classifier. The network is trained6 in an end-to-end fashion, including the word embeddings.7 To isolate the effect of syntactic structure, we also consider a baseline which is exposed only to the nouns in the sentence, in the order in which they appeared originally, and is then asked to predict the number of the following verb. The goal of this base-
4We limited our search to sentences that were shorter than 50 words. Whenever a sentence had more than one subject-verb dependency, we selected one of the dependencies at random.
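One possible instantiation of the model just described (50-dimensional word embeddings, a 50-unit LSTM, and a logistic regression output trained end-to-end), written here with Keras. The paper does not specify an implementation framework, so this is only an assumed sketch; the 10,000-word vocabulary cap is taken from footnote 7.

```python
# Assumed sketch (not the authors' code) of the number-prediction model:
# 50-d embeddings -> 50-unit LSTM -> logistic regression on the final state.
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 10000  # vocabulary cap mentioned in the paper's footnote 7

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=VOCAB_SIZE, output_dim=50),
    tf.keras.layers.LSTM(50),                       # state after the last token
    tf.keras.layers.Dense(1, activation="sigmoid")  # P(verb is plural)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

dummy_prefix = np.array([[1, 2, 3, 4, 5]])  # one 5-token integer-encoded prefix
print(model(dummy_prefix).numpy().shape)    # (1, 1): predicted plural probability
```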
5Code and data are available at http://tallinzen.net/projects/lstm_agreement.
6The network was optimized using Adam (Kingma and Ba, 2015) and early stopping based on validation set error. We trained the number prediction model 20 times with different random initializations, and report accuracy averaged across all runs. The models described in Sections 5 and 6 are based on 10 runs, with the exception of the language model, which is slower to train and was trained once. | 1611.01368#13 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 13 | We use fixed pretrained embeddings to represent question and passage words. Therefore, in the following discussion, the notation for words is interchangeable with their embedding representations.
Question-independent passage word embedding The first component simply looks up the pre-trained word embedding for the passage word, p_i.
Passage-aligned question representation In this dataset, the question-passage pairs often contain large lexical overlap or similarity near the correct answer span. To encourage the model to exploit these similarities, we include a fixed-length representation of the question based on soft-alignments with the passage word. The alignments are computed via neural attention (Bahdanau et al., 2014), and we use the variant proposed by Parikh et al. (2016), where attention scores are dot products between non-linear mappings of word embeddings.
1 ⤠j ⤠n (6)
# sij = FFNN(pi) · FFNN(qj) exp(sij) k=1 exp(sik)
exp(sij) . ay = Sa l<j<n (7) 0 ST exp(san)
n ails _ aij (8) j=l
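A minimal NumPy sketch of the passage-aligned question representation in Equations (6)-(8), assuming a single stand-in non-linearity in place of the two learned FFNNs; the function names are illustrative, not from the paper.

```python
# Sketch of Equations (6)-(8): score every (passage word, question word) pair,
# softmax over the question, and take the attention-weighted sum of question words.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def passage_aligned_question(p_i, Q, ffnn):
    """p_i: (d,) passage word embedding; Q: (n, d) question word embeddings."""
    scores = np.array([ffnn(p_i) @ ffnn(q_j) for q_j in Q])  # Eq. (6)
    attn = softmax(scores)                                   # Eq. (7)
    return attn @ Q                                          # Eq. (8)

rng = np.random.default_rng(2)
p_i, Q = rng.normal(size=4), rng.normal(size=(6, 4))
print(passage_aligned_question(p_i, Q, ffnn=np.tanh).shape)  # (4,)
```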
Passage-independent question representation We also include a representation of the question that does not depend on the passage and is shared for all passage words. | 1611.01436#13 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 14 | 7The size of the vocabulary was capped at 10000 (after lowercasing). Infrequent words were replaced with their part of speech (Penn Treebank tagset, which explicitly encodes number distinctions); this was the case for 9.6% of all tokens and 7.1% of the subjects.
line is to withhold the syntactic information carried by function words, verbs and other parts of speech. We explore two variations on this baseline: one that only receives common nouns (dogs, pipe), and another that also receives pronouns (he) and proper nouns (France). We refer to these as the noun-only baselines.
# 4 Number Prediction Results | 1611.01368#14 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 14 | Passage-independent question representation We also include a representation of the question that does not depend on the passage and is shared for all passage words.
Similar to the previous question representation, an attention score is computed via a dot-product, except the question word is compared to a universal learned embedding rather than any particular passage word. Additionally, we incorporate contextual information with a BiLSTM before aggregating the outputs using this attention mechanism.
The goal is to generate a coarse-grained summary of the question that depends on word order. Formally, the passage-independent question representation q^indep is computed as follows:
{q'_1, ..., q'_n} = BILSTM(q)   (9)
s_j = w_q · FFNN(q'_j),   1 ≤ j ≤ n   (10)
a_j = exp(s_j) / Σ_{k=1..n} exp(s_k),   1 ≤ j ≤ n   (11)
q^indep = Σ_{j=1..n} a_j · q'_j   (12)
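A matching NumPy sketch of Equations (10)-(12), with precomputed stand-in vectors in place of the question-level BiLSTM of Equation (9); names and shapes are illustrative assumptions.

```python
# Sketch of Equations (10)-(12): score each contextualised question word against
# a learned vector, softmax, and pool. The BiLSTM of Eq. (9) is replaced here by
# precomputed stand-in outputs q'_j.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def passage_independent_question(Q_ctx, w_q, ffnn):
    """Q_ctx: (n, d) BiLSTM outputs q'_j; w_q: (d,) learned scoring vector."""
    scores = np.array([w_q @ ffnn(q_j) for q_j in Q_ctx])  # Eq. (10)
    attn = softmax(scores)                                 # Eq. (11)
    return attn @ Q_ctx                                    # Eq. (12)

rng = np.random.default_rng(3)
Q_ctx, w_q = rng.normal(size=(6, 4)), rng.normal(size=4)
print(passage_independent_question(Q_ctx, w_q, ffnn=np.tanh).shape)  # (4,)
```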
This representation is a bidirectional generalization of the question representation recently proposed by Li et al. (2016) for a different question-answering task.
Given the above three components, the complete question-focused passage word embedding for p_i is their concatenation: p*_i = [p_i, q_i^align, q^indep].
| 1611.01436#14 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 15 | # 4 Number Prediction Results
Overall accuracy: Accuracy was very high over- all: the system made an incorrect number prediction only in 0.83% of the dependencies. The noun-only baselines performed signiï¬cantly worse: 4.2% errors for the common-nouns case and 4.5% errors for the all-nouns case. This suggests that function words, verbs and other syntactically informative elements play an important role in the modelâs ability to cor- rectly predict the verbâs number. However, while the noun-only baselines made more than four times as many mistakes as the number prediction system, their still-low absolute error rate indicates that around 95% of agreement dependencies can be captured based solely on the sequence of nouns preceding the verb. This is perhaps unsurprising: sentences are often short and the verb is often directly adjacent to the sub- ject, making the identiï¬cation of the subject simple. To gain deeper insight into the syntactic capabilities of the model, then, the rest of this section investigates its performance on more challenging dependencies.8 | 1611.01368#15 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 15 | Given the above three components, the complete question-focused passage word embedding for pi is their concatenation: pâ
4
(9)
Softmax Span score Hidden layer ï¬xed to ï¬xed to the to the to the turbine the turbine Span embedding Passage-level BiLSTM Question-focused passage word embedding ï¬xed to the turbine Passage-independent question representation (3) + Question-level BiLSTM What are stators attached to ? Passage-aligned question representation (1) ï¬xed + (2) | 1611.01436#15 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 16 | Distance: We ï¬rst examine whether the network shows evidence of generalizing to dependencies where the subject and the verb are far apart. We focus in this analysis on simpler cases where no nouns in- tervened between the subject and the verb. As Figure 2a shows, performance did not degrade considerably when the distance between the subject and the verb grew up to 15 words (there were very few longer dependencies). This indicates that the network gen- eralized the dependency from the common distances of 0 and 1 to rare distances of 10 and more.
Agreement attractors: We next examine how the model's error rate was affected by nouns that intervened between the subject and the verb in the linear
8These properties of the dependencies were identified by parsing the test sentences using the parser described in Goldberg and Nivre (2012).
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 16 | Figure 1: A visualization of RASOR, where the question is âWhat are the stators attached to?â and the passage is â. . . ï¬xed to the turbine . . . â. The model constructs question-focused passage word embeddings by concate- nating (1) the original passage word embedding, (2) a passage-aligned representation of the question, and (3) a passage-independent representation of the question shared across all passage words. We use a BiLSTM over these concatenated embeddings to efï¬ciently recover embedding representations of all possible spans, which are then scored by the ï¬nal layer of the model.
3.4 LEARNING
Given the above model speciï¬cation, learning is straightforward. We simply maximize the log- likelihood of the correct answer candidates and backpropagate the errors end-to-end.
# 4 EXPERIMENTAL SETUP | 1611.01436#16 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
Figure 2: (a-d) Error rates of the LSTM number prediction model as a function of: (a) distance between the subject and the verb, in dependencies that have no intervening nouns; (b) presence and number of last intervening noun; (c) count of attractors in dependencies with homogeneous intervention; (d) presence of a relative clause with and without an overt relativizer in dependencies with homogeneous intervention and exactly one attractor. All error bars represent 95% binomial confidence intervals.
(e-f) Additional plots: (e) count of attractors per dependency in the corpus (note that the y-axis is on a log scale); (f) embeddings of singular and plural nouns, projected onto their first two principal components.
order of the sentence. We first focus on whether or not there were any intervening nouns, and if there were, whether the number of the subject differed from the number of the last intervening noun, the type of noun that would trip up the simple heuristic of agreeing with the most recent noun.
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 17 | # 4 EXPERIMENTAL SETUP
We represent each of the words in the question and document using 300 dimensional GloVe embed- dings trained on a corpus of 840bn words (Pennington et al., 2014). These embeddings cover 200k words and all out of vocabulary (OOV) words are projected onto one of 1m randomly initialized 300d embeddings. We couple the input and forget gates in our LSTMs, as described in Greff et al. (2016), and we use a single dropout mask to apply dropout across all LSTM time-steps as proposed by Gal & Ghahramani (2016). Hidden layers in the feed forward neural networks use rectiï¬ed linear units (Nair & Hinton, 2010). Answer candidates are limited to spans with at most 30 words. | 1611.01436#17 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 18 | As Figure 2b shows, a last intervening noun of the same number as the subject increased error rates only moderately, from 0.4% to 0.7% in singular subjects and from 1% to 1.4% in plural subjects. On the other hand, when the last intervening noun was an agree- ment attractor, error rates increased by almost an order of magnitude (to 6.5% and 5.4% respectively). Note, however, that even an error rate of 6.5% is quite impressive considering uninformed strategies such as random guessing (50% error rate), always assigning the more common class label (32% error rate, since 32% of the subjects in our corpus are plu- ral) and the number-of-most-recent-noun heuristic (100% error rate). The noun-only LSTM baselines performed much worse in agreement attraction cases, with error rates of 46.4% (common nouns) and 40% (all nouns). | 1611.01368#18 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 18 | To choose the final model configuration, we ran grid searches over: the dimensionality of the LSTM hidden states; the width and depth of the feed forward neural networks; dropout for the LSTMs; the number of stacked LSTM layers (1, 2, 3); and the decay multiplier [0.9, 0.95, 1.0] with which we multiply the learning rate every 10k steps. The best model uses 50d LSTM states; two-layer BiLSTMs for the span encoder and the passage-independent question representation; dropout of 0.1 throughout; and a learning rate decay of 5% every 10k steps.
All models are implemented using TensorFlow[3] and trained on the SQUAD training set using the ADAM (Kingma & Ba, 2015) optimizer with a mini-batch size of 4 and trained using 10 asynchronous training threads on a single machine.
# 5 RESULTS
We train on the 80k (question, passage, answer span) triples in the SQUAD training set and report results on the 10k examples in the SQUAD development and test sets.
All results are calculated using the official SQUAD evaluation script, which reports exact answer match and F1 overlap of the unigrams between the predicted answer and the closest labeled answer from the 3 reference answers given in the SQUAD development set. | 1611.01436#18 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 19 | We next tested whether the effect of attractors is cumulative, by focusing on dependencies with multiple attractors. To avoid cases in which the effect of an attractor is offset by an intervening noun with the same number as the subject, we restricted our search to dependencies in which all of the intervening nouns had the same number, which we term dependencies with homogeneous intervention. For example, (9) has homogeneous intervention whereas (10) does not:
The roses in the vase by the door are red.
The roses in the vase by the chairs are red.
Figure 2c shows that error rates increased gradually as more attractors intervened between the subject and the verb. Performance degraded quite slowly, however: even with four attractors the error rate was only 17.6%. As expected, the noun-only baselines performed significantly worse in this setting, reaching an error rate of up to 84% (worse than chance) in the case of four attractors. This confirms that syntactic cues are critical for solving the harder cases. | 1611.01368#19 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 19 | # 5.1 COMPARISONS TO OTHER WORK
Our model with recurrent span representations (RASOR) is compared to all previously published systems[4]. Rajpurkar et al. (2016) published a logistic regression baseline as well as human performance on the SQUAD task. The logistic regression baseline uses the output of an existing syntactic parser both as a constraint on the set of allowed answer spans, and as a method of creating sparse features for an answer-centric scoring model. Despite not having access to any external representation of linguistic structure, RASOR achieves an error reduction of more than 50% over this baseline, both in terms of exact match and F1, relative to the human performance upper bound.
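The >50% figure can be checked from the dev-set numbers in Table 1 below; the small computation here is ours (treating the gap to human performance as the error), not the authors' exact accounting.

```python
# Relative error reduction over the logistic-regression baseline, taking human
# performance as the upper bound (dev-set numbers from Table 1).
def error_reduction(baseline, model, human):
    # error is measured as the gap to human performance
    return (model - baseline) / (human - baseline)

print(error_reduction(39.8, 66.4, 81.4))  # exact match: ~0.64
print(error_reduction(51.0, 74.9, 91.0))  # F1:          ~0.60  -> both > 50%
```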
System | Dev EM | Dev F1 | Test EM | Test F1
Logistic regression baseline | 39.8 | 51.0 | 40.4 | 51.0
Match-LSTM (Sequence) | 54.5 | 67.7 | 54.8 | 68.0
Match-LSTM (Boundary) | 60.5 | 70.7 | 59.4 | 70.0
RASOR | 66.4 | 74.9 | 67.4 | 75.5
Human | 81.4 | 91.0 | 82.3 | 91.2
Table 1: Exact match (EM) and span F1 on SQUAD. | 1611.01436#19 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 20 | Relative clauses: We now look in greater detail into the network's performance when the words that intervened between the subject and verb contained a relative clause. Relative clauses with attractors are likely to be fairly challenging, for several reasons. They typically contain a verb that agrees with the attractor, reinforcing the misleading cue to noun number. The attractor is often itself a subject of an irrelevant verb, making a potential "agree with the most recent subject" strategy unreliable. Finally, the existence of a relative clause is sometimes not overtly indicated by a function word (relativizer), as in (11) (for comparison, see the minimally different (12)):
The landmarks this article lists here are also run-of-the-mill and not notable.
The landmarks that this article lists here are also run-of-the-mill and not notable. | 1611.01368#20 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 20 | Table 1: Exact match (EM) and span F1 on SQUAD.
More closely related to RASOR is the boundary model with Match-LSTMs and Pointer Networks by Wang & Jiang (2016). Their model similarly uses recurrent networks to learn embeddings of each passage word in the context of the question, and it can also capture interactions between endpoints, since the end index probability distribution is conditioned on the start index. However, both training and evaluation are greedy, making their system susceptible to search errors when decoding. In contrast, RASOR can efficiently and explicitly model the quadratic number of possible answers, which leads to a 14% error reduction over the best performing Match-LSTM model.
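A toy illustration of the decoding difference discussed above (the scores are random placeholders, not the trained models'): greedy boundary decoding commits to a start index before seeing the joint span score, so it can miss the jointly best span, whereas an exact argmax over all O(n^2) spans cannot.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
joint = rng.normal(size=(n, n))                  # joint[i, j]: score of span [i, j]
joint[np.tril_indices(n, -1)] = -np.inf          # spans must satisfy start <= end

# Exact decoding over all O(n^2) spans (what scoring explicit spans allows).
best = np.unravel_index(np.argmax(joint), joint.shape)

# Greedy boundary decoding: commit to a start from a start-only distribution,
# then pick the best end for that start. A misleading start marginal causes a
# search error that exact decoding cannot make.
start_marginal = rng.normal(size=n)              # stand-in start distribution
g_start = int(np.argmax(start_marginal))
g_end = int(np.argmax(joint[g_start]))

print("exact argmax span:", tuple(map(int, best)))
print("greedy span:      ", (g_start, g_end))
```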
5.2 MODEL VARIATIONS
We investigate two main questions in the following ablations and comparisons. (1) How important are the two methods of representing the question described in Section 3.3? (2) What is the impact of learning a loss function that accurately reflects the span prediction task? | 1611.01436#20 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 21 | The landmarks this article lists here are also run-of-the-mill and not notable.
The landmarks that this article lists here are also run-of-the-mill and not notable.
For data sparsity reasons we restricted our attention to dependencies with a single attractor and no other intervening nouns. As Figure 2d shows, attraction errors were more frequent in dependencies with an overt relative clause (9.9% errors) than in dependencies without a relative clause (3.2%), and considerably more frequent when the relative clause was not introduced by an overt relativizer (25%). As in the case of multiple attractors, however, while the model struggled with the more difficult dependencies, its performance was much better than random guessing, and slightly better than a majority-class strategy. | 1611.01368#21 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 21 | Question representations Table 2a shows the performance of RASOR when either of the two question representations described in Section 3.3 is removed. The passage-aligned question representation is crucial, since lexically similar regions of the passage provide strong signal for relevant answer spans. If the question is only integrated through the inclusion of a passage-independent representation, performance drops drastically. The passage-independent question representation over
[3] www.tensorflow.org [4] As of submission, other unpublished systems are shown on the SQUAD leaderboard, including Match-LSTM with Ans-Ptr (Boundary+Ensemble), Co-attention, r-net, Match-LSTM with Bi-Ans-Ptr (Boundary), Co-attention old, Dynamic Chunk Reader, Dynamic Chunk Ranker with Convolution layer, Attentive Chunker.
the BiLSTM is less important, but it still accounts for over 3% exact match and F1. The input of both of these components is analyzed qualitatively in Section 6. | 1611.01436#21 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 22 | Word representations: We explored the 50-dimensional word representations acquired by the model by performing a principal component analysis. We assigned a part-of-speech (POS) to each word based on the word's most common POS in the corpus. We only considered relatively ambiguous words, in which a single POS accounted for more than 90% of the word's occurrences in the corpus. Figure 2f shows that the first principal component corresponded almost perfectly to the expected number of the noun, suggesting that the model learned the number of specific words very well; recall that the model did not have access during training to noun number annotations or to morphological suffixes such as -s that could be used to identify plurals.
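A sketch of the kind of analysis described above, assuming access to the learned embedding matrix and corpus-derived number labels; both arrays below are random placeholders, and the projection/separation check is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, d = 1000, 50
emb = rng.normal(size=(n_words, d))                  # placeholder for learned 50-d embeddings
is_plural = rng.integers(0, 2, size=n_words) == 1    # placeholder number labels for nouns

# PCA via SVD of the centred embedding matrix.
centred = emb - emb.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = centred @ vt[0]                                # projection on the first principal component

# If PC1 encodes grammatical number, singular and plural nouns separate along it.
sep = abs(pc1[is_plural].mean() - pc1[~is_plural].mean()) / pc1.std()
print(f"separation along PC1 (in std units): {sep:.2f}")
```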
Visualizing the network's activations: We start investigating the inner workings of the number prediction network by analyzing its activation in response to particular syntactic constructions. To simplify the analysis, we deviate from our practice in the rest of this paper and use constructed sentences.
We first constructed sets of sentence prefixes based on the following patterns:
PP: The toy(s) of the boy(s)...
RC: The toy(s) that the boy(s)... | 1611.01368#22 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 22 | the BiLSTM is less important, but it still accounts for over 3% exact match and F1. The input of both of these components is analyzed qualitatively in Section 6.
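Section 3.3 is not reproduced in this excerpt, so the sketch below only illustrates the two question views that Table 2a ablates: a single passage-independent summary of the question and a passage-aligned representation computed, for each passage word, by attending over the question encodings. Dimensions, mean pooling, and the dot-product attention are assumptions, not the paper's exact equations.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d, n_q, n_p = 8, 4, 10                   # embedding size, question length, passage length
Q = rng.normal(size=(n_q, d))            # question word encodings (e.g., BiLSTM outputs)
P = rng.normal(size=(n_p, d))            # passage word embeddings

# Passage-independent question representation: one vector summarising the question.
q_indep = Q.mean(axis=0)                                   # (d,)

# Passage-aligned question representation: per passage word, the expected
# question vector under an attention distribution over question words.
att = softmax(P @ Q.T, axis=1)                             # (n_p, n_q)
q_aligned = att @ Q                                        # (n_p, d)

# Each passage position is augmented with both views before the passage-level BiLSTM.
augmented = np.concatenate([P, q_aligned, np.tile(q_indep, (n_p, 1))], axis=1)
print(augmented.shape)                                     # (n_p, 3d)
```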
(a) Ablation of question representations:
Question representation | EM | F1
Only passage-independent | 48.7 | 56.6
Only passage-aligned | 63.1 | 71.3
RASOR | 66.4 | 74.9
(b) Comparisons for different learning objectives given the same passage-level BiLSTM:
Learning objective | EM | F1
Membership prediction | 57.9 | 69.7
BIO sequence prediction | 63.9 | 73.0
Endpoints prediction | 65.3 | 75.1
Span prediction w/ log loss | 65.2 | 73.6
Table 2: Results for variations of the model architecture presented in Section 3.
Learning objectives Given a fixed architecture that is capable of encoding the input question-passage pairs, there are many ways of setting up a learning objective to encourage the model to predict the correct span. In Table 2b, we provide comparisons of some alternatives (learned end-to-end) given only the passage-level BiLSTM from RASOR. In order to provide clean comparisons, we restrict the alternatives to objectives that are trained and evaluated with exact decoding. | 1611.01436#22 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 23 | PP: The toy(s) of the boy(s)...
RC: The toy(s) that the boy(s)...
These patterns differ by exactly one function word, which determines the type of the modifier of the main clause subject: a prepositional phrase (PP) in the first sentence and a relative clause (RC) in the second. In PP sentences the correct number of the upcoming verb is determined by the main clause subject toy(s); in RC sentences it is determined by the embedded subject boy(s). | 1611.01368#23 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 23 | The simplest alternative is to consider this task as binary classification for every word (Membership prediction in Table 2b). In this baseline, we optimize the logistic loss for binary labels indicating whether passage words belong to the correct answer span. At prediction time, a valid span can be recovered in linear time by finding the maximum contiguous sum of scores.
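The linear-time decoding step for this baseline is a maximum-subarray problem over per-word scores; a sketch (with toy scores, e.g. log-odds of each word belonging to the answer) is shown below.

```python
def best_contiguous_span(scores):
    """Kadane's algorithm with indices: returns (start, end, total) of the
    contiguous span with the maximum summed score, in O(n) time."""
    best_start = best_end = cur_start = 0
    best_sum = cur_sum = scores[0]
    for i in range(1, len(scores)):
        if cur_sum <= 0:                 # restarting beats extending a non-positive prefix
            cur_start, cur_sum = i, scores[i]
        else:
            cur_sum += scores[i]
        if cur_sum > best_sum:
            best_start, best_end, best_sum = cur_start, i, cur_sum
    return best_start, best_end, best_sum

# Toy per-word scores for a 7-word passage.
word_scores = [-1.2, 0.4, 2.1, 1.3, -0.5, -2.0, 0.2]
print(best_contiguous_span(word_scores))   # -> (1, 3, 3.8)
```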
Li et al. (2016) proposed a sequence-labeling scheme that is similar to the above baseline (BIO sequence prediction in Table 2b). We follow their proposed model and learn a conditional random field (CRF) layer after the passage-level BiLSTM to model transitions between the different labels. At prediction time, a valid span can be recovered in linear time using Viterbi decoding, with hard transition constraints to enforce a single contiguous output.
We also consider a model that independently predicts the two endpoints of the answer span (Endpoints prediction in Table 2b). This model uses the softmax loss over passage words during learning. When decoding, we only need to enforce the constraint that the start index is no greater than the end index. Without the interactions between the endpoints, this can be computed in linear time. Note that this model has the same expressivity as RASOR if the span-level FFNN were removed. | 1611.01436#23 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 24 | We generated all four versions of each pattern, and repeated the process ten times with different lexical items (the house(s) of/that the girl(s), the computer(s) of/that the student(s), etc.), for a total of 80 sentences. The network made correct number predictions for all 40 PP sentences, but made three errors in RC sentences. We averaged the word-by-word activations across all sets of ten sentences that had the same combination of modifier (PP or RC), first noun number and second noun number. Plots of the activation of all 50 units are provided in the Appendix (Figure 5). Figure 3a highlights a unit (Unit 1) that shows a particularly clear pattern: it tracks the number of the main clause subject throughout the PP modifier, resets when it reaches the relativizer that which introduces the RC modifier, and then switches to tracking the number of the embedded subject.
To explore how the network deals with dependencies spanning a larger number of words, we tracked its activation during the processing of the following two sentences:[9]
The houses of/that the man from the office across the street...
The network made the correct prediction for the PP | 1611.01368#24 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 24 | Lastly, we compare with a model using the same architecture as RASOR but trained with a binary logistic loss rather than a softmax loss over spans (Span prediction w/ logistic loss in Table 2b).
The trend in Table 2b shows that the model is better at leveraging the supervision as the learning objective more accurately reflects the fundamental task at hand: determining the best answer span.
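The two ends of that spectrum can be written down directly; the sketch below contrasts a cross-entropy over all valid spans with independent start/end softmaxes. The scores are random placeholders, not model outputs, and the losses are shown only to make the difference in supervision concrete.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

rng = np.random.default_rng(0)
n = 8                                          # passage length (toy)
span_score = rng.normal(size=(n, n))           # score for span [i, j] (e.g., from a span FFNN)
valid = np.triu(np.ones((n, n), dtype=bool))   # start <= end
gold = (2, 4)                                  # gold answer span

# (1) Softmax over all valid spans: the loss couples the two endpoints.
flat = np.where(valid, span_score, -np.inf).ravel()
span_nll = -log_softmax(flat)[gold[0] * n + gold[1]]

# (2) Independent endpoint prediction: two separate softmaxes, no interaction.
start_score, end_score = rng.normal(size=n), rng.normal(size=n)
endpoint_nll = -log_softmax(start_score)[gold[0]] - log_softmax(end_score)[gold[1]]

print(f"span softmax NLL: {span_nll:.3f}")
print(f"endpoint NLL:     {endpoint_nll:.3f}")
```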
First, we observe general improvements when using labels that closely align with the task. For example, the labels for membership prediction simply happen to provide single contiguous spans in the supervision. The model must consider far more possible answers than it needs to (the power set of all words). The same problem holds for BIO sequence prediction: the model must do additional work to learn the semantics of the BIO tags. On the other hand, in RASOR, the semantics of an answer span is naturally encoded by the set of labels. | 1611.01436#24 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 25 | The houses of/that the man from the office across the street...
The network made the correct prediction for the PP
[9] We simplified this experiment in light of the relative robustness of the first experiment to lexical items and to whether each of the nouns was singular or plural.
Figure 3: Word-by-word visualization of LSTM activation: (a) a unit that correctly predicts the number of an upcoming verb. This number is determined by the first noun (X) when the modifier is a prepositional phrase (PP) and by the second noun (Y) when it is an object relative clause (RC); (b) the evolution of the predictions in the case of a longer modifier: the predictions correctly diverge at the embedded noun, but then incorrectly converge again; (c) the activation of four representative units over the course of the same sentences. | 1611.01368#25 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 25 | Second, we observe the importance of allowing interactions between the endpoints using the span-level FFNN. RASOR outperforms the endpoint prediction model by 1.1 in exact match. The interaction between endpoints enables RASOR to enforce consistency across its two substructures. While this does not provide improvements for predicting the correct region of the answer (captured by the F1 metric, which drops by 0.2), it is more likely to predict a clean answer span that matches human judgment exactly (captured by the exact-match metric).
# 6 ANALYSIS
Figure 2 shows how the performances of RASOR and the endpoint predictor introduced in Section 5.2 degrade as the lengths of their predictions increase. It is clear that explicitly modeling interactions between end markers is increasingly important as the span grows in length.
[Figure 2: bar chart of F1 and Exact Match accuracy by answer length for RASOR and the endpoint predictor; see the caption below.] | 1611.01436#25 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 26 | but not the RC sentence (as before, the correct predictions are PLURAL for PP and SINGULAR for RC). Figure 3b shows that the network begins by making the correct prediction for RC immediately after that, but then falters: as the sentence goes on, the resetting effect of that diminishes. The activation time courses shown in Figure 3c illustrate that Unit 1, which identified the subject correctly when the prefix was short, gradually forgets that it is in an embedded clause as the prefix grows longer. By contrast, Unit 0 shows a stable capacity to remember the current embedding status. Additional representative units shown in Figure 3c are Unit 46, which consistently stores the number of the main clause subject, and Unit 27, which tracks the number of the most recent noun, resetting at noun phrase boundaries.
While the interpretability of these patterns is encouraging, our analysis only scratches the surface of the rich possibilities of a linguistically-informed analysis of a neural network trained to perform a syntax-sensitive task; we leave a more extensive investigation for future work.
# 5 Alternative Training Objectives | 1611.01368#26 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 26 | [Figure 3: attention-mask heatmaps for the passage-independent and passage-aligned question representations on two examples; see the caption below.]
Figure 2: F1 and Exact Match (EM) accuracy of RASOR and the endpoint predictor baseline over different prediction lengths.
Figure 3: Attention masks from RASOR. Top predictions for the first example are "Egyptians", "Egyptians against the British", "British". Top predictions for the second are "unjust laws", "what they deem to be unjust laws", "laws". | 1611.01436#26 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 27 | # 5 Alternative Training Objectives
The number prediction task followed a fully supervised objective, in which the network identifies the number of an upcoming verb based only on the words preceding the verb. This section proposes three objectives that modify some of the goals and assumptions of the number prediction objective (see Table 1 for an overview).
Verb inflection: This objective is similar to number prediction, with one difference: the network receives not only the words leading up to the verb, but also the singular form of the upcoming verb (e.g., writes). In practice, then, the network needs to decide between the singular and plural forms of a particular verb (writes or write). Having access to the semantics of the verb can help the network identify the noun that serves as its subject without using the syntactic subjecthood criteria. For example, in the following sentence:
People from the capital often eat pizza.
Objective | Sample input | Training signal | Prediction task | Correct answer
Number prediction | The keys to the cabinet | PLURAL | SINGULAR/PLURAL? | PLURAL
Verb inflection | The keys to the cabinet [is/are] | PLURAL | SINGULAR/PLURAL? | PLURAL
Grammaticality judgment | The keys to the cabinet are here. | GRAMMATICAL | GRAMMATICAL/UNGRAMMATICAL? | GRAMMATICAL
Language modeling | The keys to the cabinet | are | P(are) > P(is)? | True | 1611.01368#27 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 27 | Figure 3 shows attention masks for both of RASOR's question representations. The passage-independent question representation pays most attention to the words that could attach to the answer in the passage ("brought", "against") or describe the answer category ("people"). Meanwhile, the passage-aligned question representation pays attention to similar words. The top predictions for both examples are all valid syntactic constituents, and they all have the correct semantic category. However, RASOR assigns almost as much probability mass to its incorrect third prediction "British" as it does to the top scoring correct prediction "Egyptians". This showcases a common failure case for RASOR, where it can find an answer of the correct type close to a phrase that overlaps with the question, but it cannot accurately represent the semantic dependency on that phrase.
# 7 CONCLUSION | 1611.01436#27 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 28 | Table 1: Examples of the four training objectives and corresponding prediction tasks.
only people is a plausible subject for eat; the network can use this information to infer that the correct form of the verb is eat is rather than eats.
This objective is similar to the task that humans face during language production: after the speaker has decided to use a particular verb (e.g., write), he or she needs to decide whether its form will be write or writes (Levelt et al., 1999; Staub, 2009).
Grammaticality judgments: The previous objectives explicitly indicate the location in the sentence in which a verb can appear, giving the network a cue to syntactic clause boundaries. They also explicitly direct the network's attention to the number of the verb. As a form of weaker supervision, we experimented with a grammaticality judgment objective. In this scenario, the network is given a complete sentence, and is asked to judge whether or not it is grammatical.
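A sketch of how such training pairs can be constructed, following the recipe described later in this excerpt (half of the sentences keep their original verb and are labeled grammatical, half have the verb's number flipped and are labeled ungrammatical). The inflection table below is a toy stand-in; the real setup derives verb forms from the corpus.

```python
import random

# Toy stand-in for the present-tense number-flip lookup (assumed, not from the paper).
FLIP = {"is": "are", "are": "is", "writes": "write", "write": "writes"}

def make_example(tokens, verb_index, rng):
    """Return (sentence, label): label 1 = grammatical, 0 = verb number flipped."""
    tokens = list(tokens)
    if rng.random() < 0.5:
        return tokens, 1
    tokens[verb_index] = FLIP[tokens[verb_index]]
    return tokens, 0

rng = random.Random(0)
sent = "the keys to the cabinet are here".split()
for _ in range(3):
    print(make_example(sent, verb_index=5, rng=rng))
```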
attend to the number of the verb. In the network that implements this training scenario, RNN activation after each word is fed into a fully connected dense layer followed by a softmax layer over the entire vocabulary. | 1611.01368#28 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 28 | # 7 CONCLUSION
We have shown a novel approach for performing extractive question answering on the SQUAD dataset by explicitly representing and scoring answer span candidates. The core of our model relies on a recurrent network that enables shared computation for the shared substructure across span candidates. We explore different methods of encoding the passage and question, showing the benefits of including both passage-independent and passage-aligned question representations. While we show that this encoding method is beneficial for the task, this is orthogonal to the core contribution of efficiently computing span representation. In future work, we plan to explore alternate architectures that provide input to the recurrent span representations.
# REFERENCES
Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of ACL, 2016.
Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. Proceedings of NIPS, 2016.
8 | 1611.01436#28 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 29 | We evaluate the knowledge that the network has acquired about subject-verb noun agreement using a task similar to the verb inflection task. To perform the task, we compare the probabilities that the model assigns to the two forms of the verb that in fact occurred in the corpus (e.g., write and writes), and select the form with the higher probability.[11] As this task is not part of the network's training objective, and the model needs to allocate considerable resources to predicting each word in the sentence, we expect the LM to perform worse than the explicitly supervised objectives.
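The comparison described above can be written as a single predicate over language-model scores; lm_logprob below is a hypothetical scoring function (log P(word | prefix)), not an interface defined in the paper.

```python
def prefers_correct_form(lm_logprob, prefix_tokens, correct, incorrect):
    """True if the LM assigns the attested verb form a higher probability than
    its number-flipped counterpart, given the same sentence prefix."""
    return lm_logprob(prefix_tokens, correct) > lm_logprob(prefix_tokens, incorrect)

# Hypothetical usage:
#   prefix = "the keys to the cabinet".split()
#   prefers_correct_form(lm_logprob, prefix, correct="are", incorrect="is")
```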
To train the network, we made half of the examples in our training corpus ungrammatical by flipping the number of the verb.[10] The network read the entire sentence and received a supervision signal at the end. This task is modeled after a common human data collection technique in linguistics (Schütze, 1996), although our training regime is of course very different to the training that humans are exposed to: humans rarely receive ungrammatical sentences labeled as such (Bowerman, 1988). | 1611.01368#29 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 29 | Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. Proceedings of NIPS, 2016.
Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, PP:1-11, 2016.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of NIPS, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of ICLR, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Long Short-term Memory. Neural computation, 9(8): 1735-1780, 1997.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Proceedings of ICLR, 2015. | 1611.01436#29 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 30 | Language modeling (LM): Finally, we experimented with a word prediction objective, in which the model did not receive any grammatically relevant supervision (Elman, 1990; Elman, 1991). In this scenario, the goal of the network is to predict the next word at each point in every sentence. It receives unlabeled sentences and is not specifically instructed to
Results: When considering all agreement dependencies, all models achieved error rates below 7% (Figure 4a); as mentioned above, even the noun-only number prediction baselines achieved error rates below 5% on this task. At the same time, there were large differences in accuracy across training objectives. The verb inflection network performed slightly but significantly better than the number prediction one (0.8% compared to 0.83% errors), suggesting that the semantic information carried by the verb is moderately helpful. The grammaticality judgment objective performed somewhat worse, at 2.5% errors, but still outperformed the noun-only baselines by a large margin, showing the capacity of the LSTM architecture to learn syntactic dependencies even given fairly indirect evidence.
The worst performer was the language model. It | 1611.01368#30 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 30 | Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Proceedings of ICLR, 2015.
Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. CoRR, abs/1607.06275, 2016.
Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of ICML, 2010.
Ankur P Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of EMNLP, 2016.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Proceedings of EMNLP, 2014.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP, 2016.
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP, 2013. | 1611.01436#30 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 31 | The worst performer was the language model. It
10 In some sentences this will not in fact result in an ungrammatical sentence, e.g. with collective nouns such as group, which are compatible with both singular and plural verbs in some dialects of English (Huddleston and Pullum, 2002); those cases appear to be rare.
11 One could also imagine performing the equivalent of the number prediction task by aggregating LM probability mass over all plural verbs and all singular verbs. This approach may be more severely affected by part-of-speech ambiguous words than the one we adopted; we leave the exploration of this approach to future work.
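For concreteness, the following is a minimal Python sketch (not the authors' code) of how an agreement decision can be read off a trained next-word model: the reading adopted in the text compares the log-probability of the correct verb form with that of its opposite-number form at the verb position, while the footnote-11 alternative aggregates probability mass over all singular and all plural verbs. The next-word-LM interface and the toy uniform LM below are assumptions introduced purely for illustration.

import math
from typing import Callable, Dict, List

def toy_uniform_lm(prefix: List[str]) -> Dict[str, float]:
    # Toy stand-in so the sketch runs end to end; a trained next-word LM would go here.
    vocab = ["is", "are", "writes", "write", "the", "keys", "cabinet", "to"]
    return {w: math.log(1.0 / len(vocab)) for w in vocab}

def prefers_correct_form(lm: Callable[[List[str]], Dict[str, float]],
                         prefix: List[str], correct: str, wrong: str) -> bool:
    # Adopted reading: compare the two candidate verb forms at the verb position.
    logprobs = lm(prefix)
    return logprobs[correct] > logprobs[wrong]

def aggregate_number_preference(lm: Callable[[List[str]], Dict[str, float]],
                                prefix: List[str],
                                singular_verbs: List[str],
                                plural_verbs: List[str]) -> str:
    # Footnote-11 alternative: aggregate probability mass over all verbs of each number.
    logprobs = lm(prefix)
    p_sg = sum(math.exp(logprobs[w]) for w in singular_verbs if w in logprobs)
    p_pl = sum(math.exp(logprobs[w]) for w in plural_verbs if w in logprobs)
    return "singular" if p_sg > p_pl else "plural"

# "The keys to the cabinet ___": the correct form is the plural "are".
# The uniform toy LM is indifferent, so these calls only demonstrate the interface.
prefix = ["the", "keys", "to", "the", "cabinet"]
print(prefers_correct_form(toy_uniform_lm, prefix, correct="are", wrong="is"))
print(aggregate_number_preference(toy_uniform_lm, prefix,
                                  singular_verbs=["is", "writes"],
                                  plural_verbs=["are", "write"]))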
[Figure 4 appears here; its caption is reproduced in the following chunk.] | 1611.01368#31 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01436 | 31 | Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP, 2013.
Wilson Taylor. Cloze procedure: A new tool for measuring readability. Journalism Quarterly, 30: 415–433, 1953.
Kateryna Tymoshenko, Daniele Bonadiman, and Alessandro Moschitti. Convolutional neural networks vs. convolution kernels: Feature engineering for answer sentence reranking. In Proceedings of NAACL, 2016.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proceedings of NIPS, 2015.
Ellen M. Voorhees and Dawn M. Tice. Building a question answering test collection. In Proceedings of SIGIR, 2000.
Bingning Wang, Kang Liu, and Jun Zhao. Inner attention based recurrent neural networks for answer selection. In Proceedings of ACL, 2016.
Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905, 2016.
Yi Yang, Wen-tau Yih, and Christopher Meek. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of EMNLP, 2015.
| 1611.01436#31 | Learning Recurrent Span Representations for Extractive Question Answering | The reading comprehension task, that asks questions about a given evidence
document, is a central problem in natural language understanding. Recent
formulations of this task have typically focused on answer selection from a set
of candidates pre-defined manually or through the use of an external NLP
pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset
in which the answers can be arbitrary strings from the supplied text. In this
paper, we focus on this answer extraction task, presenting a novel model
architecture that efficiently builds fixed length representations of all spans
in the evidence document with a recurrent network. We show that scoring
explicit span representations significantly improves performance over other
approaches that factor the prediction into separate predictions about words or
start and end markers. Our approach improves upon the best published results of
Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s
baseline by > 50%. | http://arxiv.org/pdf/1611.01436 | Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant | cs.CL, I.2.7 | null | null | cs.CL | 20161104 | 20170317 | [
{
"id": "1608.07905"
}
] |
1611.01368 | 32 | Figure 4: Alternative tasks and additional experiments: (a) overall error rate across tasks (note that the y-axis ends in 10%); (b) effect of count of attractors in homogeneous dependencies across training objectives; (c) comparison of the Google LM (Jozefowicz et al., 2016) to our LM and one of our supervised verb inflection systems, on a sample of sentences; (d) number prediction: effect of count of attractors using SRNs with standard training or LSTM with targeted training; (e) number prediction: difference in error rate between singular and plural subjects across RNN cell types. Error bars represent binomial 95% confidence intervals.
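For reference, one standard way to compute a binomial 95% confidence interval for an error rate $\hat{p}$ estimated from $n$ dependencies is the normal approximation (the text does not state which binomial interval was used, so this is only an illustrative choice):

$\hat{p} \pm 1.96\,\sqrt{\hat{p}\,(1-\hat{p})/n}$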
made eight times as many errors as the original number prediction network (6.78% compared to 0.83%), and did substantially worse than the noun-only baselines (though recall that the noun-only baselines were still explicitly trained to predict verb number). | 1611.01368#32 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 33 | The differences across the networks are more striking when we focus on dependencies with agreement attractors (Figure 4b). Here, the language model does worse than chance in the most difficult cases, and only slightly better than the noun-only baselines. The worse-than-chance performance suggests that attractors actively confuse the networks rather than cause them to make a random decision. The other models degrade more gracefully with the number of agreement attractors; overall, the grammaticality judgment objective is somewhat more difficult than the number prediction and verb inflection ones. In summary, we conclude that while the LSTM is capable of learning syntax-sensitive agreement dependencies under various objectives, the language-modeling objective alone is not sufficient for learning such dependencies, and a more direct form of training signal is required.
Comparison to a large-scale language model: One objection to our language modeling result is that our LM faced a much harder objective than our other models – predicting a distribution over 10,000 vocabulary items is certainly harder than binary classification – but was equipped with the same capacity (50-dimensional hidden state and word vectors). Would the performance gap between the LM and the explicitly supervised models close if we increased the capacity of the LM? | 1611.01368#33 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 34 | We address this question using a very large publicly available LM (Jozefowicz et al., 2016), which we refer to as the Google LM.12 The Google LM represents the current state-of-the-art in language modeling: it is trained on a billion-word corpus (Chelba et al., 2013), with a vocabulary of 800,000 words. It is based on a two-layer LSTM with 8192 units in each layer, or more than 300 times as many units as our LM; at 1.04 billion parameters it has almost
12 https://github.com/tensorflow/models/tree/master/lm_1b
2000 times as many parameters. It is a fine-tuned language model that achieves impressive perplexity scores on common benchmarks, requires a massive infrastructure for training, and pushes the boundaries of what's feasible with current hardware. | 1611.01368#34 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 35 | We tested the Google LM with the methodology we used to test ours.13 Due to computational resource limitations, we did not evaluate it on the entire test set, but sampled a random selection of 500 sentences for each count of attractors (testing a single sentence under the Google LM takes around 5 seconds on average). The results are presented in Figure 4c, where they are compared to the performance of the supervised verb inflection system. Despite having an order of magnitude more parameters and significantly larger training data, the Google LM performed poorly compared to the supervised models; even a single attractor led to a sharp increase in error rate to 28.5%, almost as high as our small-scale LM (32.6% on the same sentences). While additional attractors caused milder degradation than in our LM, the performance of the Google LM on sentences with four attractors was still worse than always guessing the majority class (SINGULAR).
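The following minimal Python sketch (an illustration under assumed data structures, not the released evaluation code) shows the aggregation just described: group test items by attractor count, sample up to 500 per group, apply a per-sentence scorer such as the one sketched earlier, and report the error rate next to an always-SINGULAR majority-class baseline. The field names and the scorer interface are invented for this example.

import random
from collections import defaultdict

def evaluate_by_attractor_count(test_items, scorer, n_per_bucket=500, seed=0):
    # test_items: dicts with (hypothetical) keys 'n_attractors', 'prefix',
    # 'correct_verb', 'wrong_verb', 'subject_number'.
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for item in test_items:
        buckets[item["n_attractors"]].append(item)
    report = {}
    for n_attractors, items in sorted(buckets.items()):
        sample = rng.sample(items, min(n_per_bucket, len(items)))
        lm_errors = sum(not scorer(it["prefix"], it["correct_verb"], it["wrong_verb"])
                        for it in sample)
        # Majority-class baseline: always guess SINGULAR.
        baseline_errors = sum(it["subject_number"] != "singular" for it in sample)
        report[n_attractors] = {
            "lm_error": lm_errors / len(sample),
            "always_singular_error": baseline_errors / len(sample),
        }
    return report

# Tiny demo: one hand-built item and a scorer that always answers correctly.
demo_items = [{"n_attractors": 1, "prefix": ["the", "keys", "to", "the", "cabinet"],
               "correct_verb": "are", "wrong_verb": "is", "subject_number": "plural"}]
print(evaluate_by_attractor_count(demo_items, lambda prefix, correct, wrong: True))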
In summary, our experiments with the Google LM do not change our conclusions: the contrast between the poor performance of the LMs and the strong performance of the explicitly supervised objectives suggests that direct supervision has a dramatic effect on the model's ability to learn syntax-sensitive dependencies. Given that the Google LM was already trained on several hundred times more data than the number prediction system, it appears unlikely that its relatively poor performance was due to lack of training data. | 1611.01368#35 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |