column            type           min    max
doi               stringlengths  10     10
chunk-id          int64          0      936
chunk             stringlengths  401    2.02k
id                stringlengths  12     14
title             stringlengths  8      162
summary           stringlengths  228    1.92k
source            stringlengths  31     31
authors           stringlengths  7      6.97k
categories        stringlengths  5      107
comment           stringlengths  4      398
journal_ref       stringlengths  8      194
primary_category  stringlengths  5      17
published         stringlengths  8      8
updated           stringlengths  8      8
references        list
1606.06565
113
[164] Wolfram Wiesemann, Daniel Kuhn, and Berç Rustem. “Robust Markov decision processes”. In: Mathematics of Operations Research 38.1 (2013), pp. 153–183. [165] Roman V Yampolskiy. “Utility function security in artificially intelligent agents”. In: Journal of Experimental & Theoretical Artificial Intelligence 26.3 (2014), pp. 373–389. [166] Jason Yosinski et al. “Understanding neural networks through deep visualization”. In: arXiv preprint arXiv:1506.06579 (2015). [167] Eliezer Yudkowsky. “Artificial intelligence as a positive and negative factor in global risk”. In: Global catastrophic risks 1 (2008), p. 303.
1606.06565#113
Concrete Problems in AI Safety
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"), an objective function that is too expensive to evaluate frequently ("scalable supervision"), or undesirable behavior during the learning process ("safe exploration" and "distributional shift"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.
http://arxiv.org/pdf/1606.06565
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané
cs.AI, cs.LG
29 pages
null
cs.AI
20160621
20160725
[ { "id": "1507.01986" }, { "id": "1506.02142" }, { "id": "1602.04621" }, { "id": "1602.04450" }, { "id": "1605.09304" }, { "id": "1606.05374" }, { "id": "1604.05288" }, { "id": "1603.00448" }, { "id": "1605.03661" }, { "id": "1512.02595" }, { "id": "1511.08228" }, { "id": "1510.04342" }, { "id": "1604.05280" }, { "id": "1606.04753" }, { "id": "1606.03976" }, { "id": "1602.02697" }, { "id": "1505.05424" }, { "id": "1604.06057" }, { "id": "1605.03143" }, { "id": "1506.06579" }, { "id": "1605.03142" }, { "id": "1502.05698" }, { "id": "1512.05832" }, { "id": "1502.02072" }, { "id": "1506.02438" }, { "id": "1502.02362" } ]
1606.06031
0
arXiv:1606.06031v1 [cs.CL] 20 Jun 2016 # The LAMBADA dataset: Word prediction requiring a broad discourse context∗ Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham†, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández‡ CIMeC - Center for Mind/Brain Sciences, University of Trento {firstname.lastname}@unitn.it, †[email protected] ‡Institute for Logic, Language & Computation, University of Amsterdam [email protected] # Abstract
1606.06031#0
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06031
1
# Abstract We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text. the end-to-end system of Vinyals and Le (2015), trained on large conversational datasets, produces dialogues such as the following: (1) Human: what is your job? Machine: i’m a lawyer Human: what do you do? Machine: i’m a doctor
1606.06031#1
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
1
# ABSTRACT We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly. # INTRODUCTION
1606.06160#1
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
2
(1) Human: what is your job? Machine: i’m a lawyer Human: what do you do? Machine: i’m a doctor Separately, the system responses are appropriate for the respective questions. However, when taken together, they are incoherent. The system behaviour is somewhat parrot-like. It can locally produce perfectly sensible language fragments, but it fails to take the meaning of the broader discourse context into account. Much research effort has consequently focused on designing systems able to keep information from the broader context into memory, and possibly even perform simple forms of reasoning about it (Hermann et al., 2015; Hochreiter and Schmidhuber, 1997; Ji et al., 2015; Mikolov et al., 2015; Sordoni et al., 2015; Sukhbaatar et al., 2015; Wang and Cho, 2015, a.o.). # Introduction
1606.06031#2
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
2
# INTRODUCTION Recent progress in deep Convolutional Neural Networks (DCNN) has considerably changed the landscape of computer vision (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012a) and NLP (Bahdanau et al., 2014). However, a state-of-the-art DCNN usually has a lot of parameters and high computational complexity, which both impedes its application in embedded devices and slows down the iteration of its research and development. For example, the training process of a DCNN may take up to weeks on a modern multi-GPU server for large datasets like ImageNet (Deng et al., 2009). In light of this, substantial research efforts are invested in speeding up DCNNs at both run-time and training-time, on both general-purpose (Vanhoucke et al., 2011; Gong et al., 2014; Han et al., 2015b) and specialized computer hardware (Farabet et al., 2011; Pham et al., 2012; Chen et al., 2014a;b). Various approaches like quantization (Wu et al., 2015) and sparsification (Han et al., 2015a) have also been proposed.
1606.06160#2
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
3
# Introduction The recent spurt of powerful end-to-end-trained neural networks for Natural Language Processing (Hermann et al., 2015; Rocktäschel et al., 2016; Weston et al., 2015, a.o.) has sparked interest in tasks to measure the progress they are bringing about in genuine language understanding. Special care must be taken in evaluating such systems, since their effectiveness at picking statistical generalizations from large corpora can lead to the illusion that they are reaching a deeper degree of understanding than they really are. For example, ∗ Denis and Germán share first authorship. Marco, Gemma, and Raquel share senior authorship. In this paper, we introduce the LAMBADA dataset (LAnguage Modeling Broadened to Account for Discourse Aspects). LAMBADA proposes a word prediction task where the target item is difficult to guess (for English speakers) when only the sentence in which it appears is available, but becomes easy when a broader context is presented. Consider Example (1) in Figure 1. The sentence Do you honestly think that I would want you to have a ___? has a multitude of possible continuations, but the broad context clearly indicates that the missing word is miscarriage.
1606.06031#3
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
3
Recent research efforts (Courbariaux et al., 2014; Kim & Smaragdis, 2016; Rastegari et al., 2016; Merolla et al., 2016) have considerably reduced both model size and computation complexity by using low bitwidth weights and low bitwidth activations. In particular, in BNN (Courbariaux & Bengio, 2016) and XNOR-Net (Rastegari et al., 2016), both weights and input activations of convolutional layers1 are binarized. Hence during the forward pass the most computationally expensive convolutions can be done by bitwise operation kernels, thanks to the following formula which computes the dot product of two bit vectors x and y using bitwise operations, where bitcount counts the number of bits in a bit vector: x · y = bitcount(and(x, y)), x_i, y_i ∈ {0, 1} ∀i. (1) 1 Note fully-connected layers are special cases of convolutional layers.
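A minimal NumPy sketch of Eqn. 1 may help make the kernel concrete; the function name and the NumPy formulation are our own illustrative choices, not the paper's released implementation.

```python
# Illustrative sketch of Eqn. 1: dot product of two {0, 1} bit vectors
# computed as bitcount(and(x, y)). Not the paper's kernel implementation.
import numpy as np

def bit_dot(x_bits, y_bits):
    """x . y for x_i, y_i in {0, 1}: count positions where both bits are 1."""
    return int(np.count_nonzero(np.logical_and(x_bits, y_bits)))

x = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
y = np.array([1, 1, 0, 1, 0, 1], dtype=np.uint8)
assert bit_dot(x, y) == int(x @ y)  # agrees with the ordinary dot product
```

On real hardware the bit vectors would be packed into machine words so that `and` and `bitcount` (popcount) each touch 32 or 64 bits per instruction; the NumPy version above only mirrors the arithmetic.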
1606.06160#3
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
4
LAMBADA casts language understanding in the classic word prediction framework of language modeling. We can thus use it to test several existing language modeling architectures, including systems with capacity to hold longer-term contextual memories. In our preliminary experiments, none of these models came even remotely close to human performance, confirming that LAMBADA is a challenging benchmark for research on automated models of natural language understanding. # 2 Related datasets
1606.06031#4
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
4
1 Note fully-connected layers are special cases of convolutional layers. However, to the best of our knowledge, no previous work has succeeded in quantizing gradients to numbers with bitwidth less than 8 during the backward pass, while still achieving comparable prediction accuracy. In some previous research (Gupta et al., 2015; Courbariaux et al., 2014), convolutions involve at least 10-bit numbers. In BNN and XNOR-Net, though weights are binarized, gradients are in full precision, therefore the backward-pass still requires convolution between 1-bit numbers and 32-bit floating-points. The inability to exploit bit convolution during the backward pass means that most training time of BNN and XNOR-Net will be spent in backward pass. This paper makes the following contributions: 1. We generalize the method of binarized neural networks to allow creating DoReFa-Net, a CNN that has arbitrary bitwidth in weights, activations, and gradients. As convolutions during forward/backward passes can then operate on low bit weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both the forward pass and the backward pass of the training process.
1606.06160#4
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
5
# 2 Related datasets The CNN/Daily Mail (CNNDM) benchmark recently introduced by Hermann et al. (2015) is closely related to LAMBADA. CNNDM includes a large set of online articles that are published together with short summaries of their main points. The task is to guess a named entity that has been removed from one such summary. Although the data are not normed by subjects, it is unlikely that the missing named entity can be guessed from the short summary alone, and thus, like in LAMBADA, models need to look at the broader context (the article). Differences between the two datasets include text genres (news vs. novels; see Section 3.1) and the fact that missing items in CNNDM are limited to named entities. Most importantly, the two datasets require models to perform different kinds of inferences over broader passages. For CNNDM, models must be able to summarize the articles, in order to make sense of the sentence containing the missing word, whereas in LAMBADA the last sentence is not a summary of the broader passage, but a continuation of the same story. Thus, in order to succeed, models must instead understand what is a plausible development of a narrative fragment or a dialogue.
1606.06031#5
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
5
2. As bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate low bitwidth neural network training on these hardware. In particular, with the power efficiency of FPGA and ASIC, we may considerably reduce energy consumption of low bitwidth neural network training. 3. We explore the configuration space of bitwidth for weights, activations and gradients for DoReFa-Net. E.g., training a network using 1-bit weights, 1-bit activations and 2-bit gradients can lead to 93% accuracy on SVHN dataset. In our experiments, gradients in general require larger bitwidth than activations, and activations in general require larger bitwidth than weights, to lessen the degradation of prediction accuracy compared to 32-bit precision counterparts. We name our method “DoReFa-Net” to take note of these phenomena. 4. We release in TensorFlow (Abadi et al.) format a DoReFa-Net3 derived from AlexNet (Krizhevsky et al., 2012) that gets 46.1% in single-crop top-1 accuracy on ILSVRC12 validation set. A reference implementation for training of a DoReFa-Net on SVHN dataset is also available.
1606.06160#5
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
6
Another related benchmark, CBT, has been introduced by Hill et al. (2016). Like LAMBADA, CBT is a collection of book excerpts, with one word randomly removed from the last sentence in a sequence of 21 sentences. While there are other design differences, the crucial distinction between CBT and LAMBADA is that the CBT passages were not filtered to be human-guessable in the broader context only. Indeed, according to the post-hoc analysis of a sample of CBT passages reported by Hill and colleagues, in a large proportion of cases in which annotators could guess the missing word from the broader context, they could also guess it from the last sentence alone. At the same time, in about one fifth of the cases, the annotators could not guess the word even when the broader context was given. Thus, only a small portion of the CBT passages are really probing the model’s ability to understand the broader context, which is instead the focus of LAMBADA. The idea of a book excerpt completion task was originally introduced in the MSRCC dataset (Zweig and Burges, 2011). However, the latter limited context to single sentences, not attempting to measure broader passage understanding.
1606.06031#6
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
6
# 2 DOREFA-NET In this section we detail our formulation of DoReFa-Net, a method to train neural networks that have low bitwidth weights and activations with low bitwidth parameter gradients. We note that while weights and activations can be deterministically quantized, gradients need to be stochastically quantized. We first outline how to exploit bit convolution kernels in DoReFa-Net and then elaborate the method to quantize weights, activations and gradients to low bitwidth numbers. 2.1 USING BIT CONVOLUTION KERNELS IN LOW BITWIDTH NEURAL NETWORK The 1-bit dot product kernel specified in Eqn. 1 can also be used to compute dot product, and consequently convolution, for low bitwidth fixed-point integers. Assume x is a sequence of M-bit fixed-point integers s.t. x = Σ_{m=0}^{M−1} c_m(x) 2^m and y is a sequence of K-bit fixed-point integers s.t. y = Σ_{k=0}^{K−1} c_k(y) 2^k, where the c_m(x) and c_k(y) are bit vectors, the dot product of x and 2 When x and y are vectors of {−1, 1}, Eqn. 1 has a variant that uses xnor instead:
1606.06160#6
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
7
text understanding can be tested through other tasks, including entailment detection (Bowman et al., 2015), answering questions about a text (Richardson et al., 2013; Weston et al., 2015) and measuring inter-clause coherence (Yin and Schütze, 2015). While different tasks can provide complementary insights into the models’ abilities, we find word prediction particularly attractive because of its naturalness (it’s easy to norm the data with non-expert humans) and simplicity. Models just need to be trained to predict the most likely word given the previous context, following the classic language modeling paradigm, which is a much simpler setup than the one required, say, to determine whether two sentences entail each other. Moreover, models can have access to virtually unlimited amounts of training data, as all that is required to train a language model is raw text. On a more general methodological level, word prediction has the potential to probe almost any aspect of text understanding, including but not limited to traditional narrower tasks such as entailment, co-reference resolution or word sense disambiguation. # 3 The LAMBADA dataset # 3.1 Data collection1
1606.06031#7
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
7
2 When x and y are vectors of {−1, 1}, Eqn. 1 has a variant that uses xnor instead: x · y = N − 2 × bitcount(xnor(x, y)), x_i, y_i ∈ {−1, 1} ∀i. (2) 3 The model and supplement materials are available at https://github.com/ppwwyyxx/tensorpack/tree/master/examples/DoReFa-Net y can be computed by bitwise operations as: x · y = Σ_{m=0}^{M−1} Σ_{k=0}^{K−1} 2^{m+k} bitcount[and(c_m(x), c_k(y))], (3) c_m(x)_i, c_k(y)_i ∈ {0, 1} ∀i, m, k. (4) In the above equation, the computation complexity is O(MK), i.e., directly proportional to bitwidth of x and y. 2.2 STRAIGHT-THROUGH ESTIMATOR
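As an illustration of Eqn. 3, the sketch below decomposes unsigned fixed-point vectors into bit planes c_m(x), c_k(y) and accumulates the weighted bitcounts; the names and the NumPy formulation are ours, not the paper's kernel.

```python
# Illustrative sketch of Eqn. 3: dot product of an M-bit and a K-bit unsigned
# fixed-point integer vector from their bit planes, using AND + bitcount.
import numpy as np

def bit_planes(v, bits):
    """c_m(v): the m-th bit of every element of v, for m = 0 .. bits-1."""
    return [((v >> m) & 1) for m in range(bits)]

def lowbit_dot(x, y, m_bits, k_bits):
    acc = 0
    for m, cx in enumerate(bit_planes(x, m_bits)):
        for k, cy in enumerate(bit_planes(y, k_bits)):
            acc += (1 << (m + k)) * int(np.count_nonzero(cx & cy))
    return acc  # O(M*K) bit-kernel calls, matching the complexity stated above

x = np.array([3, 1, 2, 0], dtype=np.uint8)  # 2-bit values
y = np.array([5, 7, 1, 6], dtype=np.uint8)  # 3-bit values
assert lowbit_dot(x, y, m_bits=2, k_bits=3) == int(x.astype(int) @ y.astype(int))
```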
1606.06160#7
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
8
# 3 The LAMBADA dataset # 3.1 Data collection1 LAMBADA consists of passages composed of a context (on average 4.6 sentences) and a target sentence. The context size is the minimum number of complete sentences before the target sentence such that they cumulatively contain at least 50 tokens (this size was chosen in a pilot study). The task is to guess the last word of the target sentence (the target word). The constraint that the target word be the last word of the sentence, while not necessary for our research goal, makes the task more natural for human subjects. The LAMBADA data come from the Book Corpus (Zhu et al., 2015). The fact that it contains unpublished novels minimizes the potential 1Further technical details are provided in the Supplementary Material (SM): http://clic.cimec.unitn.it/lambada/
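A small sketch of the construction rule just described may help; the sentence splitting, whitespace tokenization, and function name below are simplifying assumptions of ours, not the authors' preprocessing pipeline.

```python
# Illustrative sketch of the LAMBADA passage construction rule: the context is
# the minimum number of complete sentences before the target sentence whose
# cumulative length is at least 50 tokens; the target word is the last word of
# the target sentence. Tokenization here is naive whitespace splitting.
def build_passage(sentences, min_context_tokens=50):
    target_sentence = sentences[-1]
    context, n_tokens = [], 0
    for sent in reversed(sentences[:-1]):
        context.insert(0, sent)               # grow the context backwards
        n_tokens += len(sent.split())
        if n_tokens >= min_context_tokens:
            break
    target_word = target_sentence.split()[-1].strip('.?!,"')
    return {"context": " ".join(context),
            "target_sentence": target_sentence,
            "target_word": target_word}
```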
1606.06031#8
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
8
2.2 STRAIGHT-THROUGH ESTIMATOR The set of real numbers representable by a low bitwidth number k only has a small ordinality 2^k. However, mathematically any continuous function whose range is a small finite set would necessarily always have zero gradient with respect to its input. We adopt the “straight-through estimator” (STE) method (Hinton et al., 2012b; Bengio et al., 2013) to circumvent this problem. An STE can be thought of as an operator that has arbitrary forward and backward operations. A simple example is the STE defined for Bernoulli sampling with probability p ∈ [0, 1]: Forward: q ∼ Bernoulli(p), Backward: ∂c/∂p = ∂c/∂q.
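A possible realization of this Bernoulli STE is sketched below as a PyTorch autograd Function; the framework choice and the class name are our own assumptions, since the paper does not prescribe an implementation here.

```python
# Sketch of the Bernoulli STE: forward samples q ~ Bernoulli(p), backward
# passes dc/dq through unchanged as dc/dp (the sampling itself has zero
# gradient almost everywhere). Illustrative only.
import torch

class BernoulliSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, p):
        return torch.bernoulli(p)   # q in {0, 1}, E[q] = p

    @staticmethod
    def backward(ctx, grad_q):
        return grad_q               # define dc/dp := dc/dq

p = torch.full((4,), 0.3, requires_grad=True)
BernoulliSTE.apply(p).sum().backward()
print(p.grad)                       # all ones, despite the non-differentiable sampling
```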
1606.06160#8
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
9
1Further technical details are provided in the Supplementary Material (SM): http://clic.cimec.unitn.it/lambada/ (1) Context: “Yes, I thought I was going to lose the baby.” “I was scared too,” he stated, sincerity flooding his eyes. “You were ?” “Yes, of course. Why do you even ask?” “This baby wasn’t exactly planned for.” Target sentence: “Do you honestly think that I would want you to have a ___?” Target word: miscarriage (2) Context: “Why?” “I would have thought you’d find him rather dry,” she said. “I don’t know about that,” said Gabriel. “He was a great craftsman,” said Heather. “That he was,” said Flannery. Target sentence: “And Polish, to boot,” said ___. Target word: Gabriel
1606.06031#9
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
9
Forward: q ∼ Bernoulli(p), Backward: ∂c/∂p = ∂c/∂q. Here c denotes the objective function. As sampling from a Bernoulli distribution is not a differentiable function, “∂q/∂p” is not well defined, hence the backward pass cannot be directly constructed from the forward pass using the chain rule. Nevertheless, because q is on expectation equal to p, we may use the well-defined gradient ∂c/∂q as an approximation for ∂c/∂p and construct an STE as above. In other words, the STE construction gives a custom-defined “∂q/∂p”. An STE we will use extensively in this work is quantize_k, which quantizes a real number input r_i ∈ [0, 1] to a k-bit number output r_o ∈ [0, 1]. This STE is defined as below: Forward: r_o = (1 / (2^k − 1)) round((2^k − 1) r_i) (5), Backward: ∂c/∂r_i = ∂c/∂r_o. (6)
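The quantize_k STE of Eqns. 5-6 can be sketched the same way; again the PyTorch formulation and names are our own illustration, not the released code.

```python
# Sketch of quantize_k (Eqns. 5-6): forward quantizes r_i in [0, 1] onto the
# k-bit grid {0, 1/(2^k-1), ..., 1}; backward copies the gradient straight through.
import torch

class QuantizeK(torch.autograd.Function):
    @staticmethod
    def forward(ctx, r_i, k):
        n = 2 ** k - 1
        return torch.round(r_i * n) / n

    @staticmethod
    def backward(ctx, grad_ro):
        return grad_ro, None        # dc/dr_i := dc/dr_o; no gradient for k

r = torch.rand(5, requires_grad=True)
print(QuantizeK.apply(r, 2))        # values restricted to {0, 1/3, 2/3, 1}
```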
1606.06160#9
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
10
Context: Preston had been the last person to wear those chains, and I knew what I’d see and feel if they were slipped onto my skin-the Reaper’s unending hatred of me. I’d felt enough of that emotion already in the amphitheater. I didn’t want to feel anymore. “Don’t put those on me,” I whispered. “Please.” Target sentence: Sergei looked at me, surprised by my low, raspy please, but he put down the ___. Target word: chains (3) Context: They tuned, discussed for a moment, then struck up a lively jig. Everyone joined in, turning the courtyard into an even more chaotic scene, people now dancing in circles, swinging and spinning in circles, everyone making up their own dance steps. I felt my feet tapping, my body wanting to move. Target sentence: Aside from writing, I ’ve always loved ___. Target word: dancing (4) Context: He shook his head, took a step back and held his hands up as he tried to smile without losing a cigarette. “Yes you can,” Julia said in a reassuring voice. “I ’ve already focused on my friend. You just have to click the shutter, on top, here.” (5) Target sentence: He nodded sheepishly, threw his cigarette away and took the ___. Target word: camera
1606.06031#10
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
10
Forward: r_o = (1 / (2^k − 1)) round((2^k − 1) r_i) (5), Backward: ∂c/∂r_i = ∂c/∂r_o. (6) It is obvious by construction that the output q of the quantize_k STE is a real number representable by k bits. Also, since r_o is a k-bit fixed-point integer, the dot product of two sequences of such k-bit real numbers can be efficiently calculated, by using fixed-point integer dot product in Eqn. 3 followed by proper scaling. # 2.3 LOW BITWIDTH QUANTIZATION OF WEIGHTS In this section we detail our approach to getting low bitwidth weights. In previous works, STE has been used to binarize the weights. For example in BNN, weights are binarized by the following STE: Forward: r_o = sign(r_i), Backward: ∂c/∂r_i = ∂c/∂r_o. Here sign(r_i) = 2 I_{r_i ≥ 0} − 1 returns one of two possible values: {−1, 1}. In XNOR-Net, weights are binarized by the following STE, with the difference being that weights are scaled after binarized: Forward: r_o = sign(r_i) × E_F(|r_i|), Backward: ∂c/∂r_i = ∂c/∂r_o.
1606.06160#10
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
11
(5) Target sentence: He nodded sheepishly, threw his cigarette away and took the ___. Target word: camera (6) Context: In my palm is a clear stone, and inside it is a small ivory statuette. A guardian angel. “Figured if you’re going to be out at night getting hit by cars, you might as well have some backup.” I look at him, feeling stunned. Like this is some sort of sign. Target sentence: But as I stare at Harlin, his mouth curved in a confident grin, I don’t care about ___. Target word: signs (7) Context: Both its sun-speckled shade and the cool grass beneath were a welcome respite after the stifling kitchen, and I was glad to relax against the tree’s rough, brittle bark and begin my breakfast of buttery, toasted bread and fresh fruit. Even the water was tasty, it was so clean and cold. Target sentence: It almost made up for the lack of ___. Target word: coffee (8) Context: My wife refused to allow me to come to Hong Kong when the plague was at its height and –” “Your wife, Johanne? You are married at last ?” Johanne grinned. “Well, when a man gets to my age, he starts to need a few home comforts.
1606.06031#11
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
11
Forward: r_o = sign(r_i) × E_F(|r_i|), Backward: ∂c/∂r_i = ∂c/∂r_o. In XNOR-Net, the scaling factor E_F(|r_i|) is the mean of absolute value of each output channel of weights. The rationale is that introducing this scaling factor will increase the value range of weights, while still being able to exploit bit convolution kernels. However, the channel-wise scaling factors will make it impossible to exploit bit convolution kernels when computing the convolution between gradients and the weights during back propagation. Hence, in our experiments, we use a constant scalar to scale all filters instead of doing channel-wise scaling. We use the following STE for all neural networks that have binary weights in this paper: Forward: r_o = sign(r_i) × E(|r_i|) (7), Backward: ∂c/∂r_i = ∂c/∂r_o. (8) In case we use k-bit representation of the weights with k > 1, we apply the STE f^k_ω to weights as follows: Forward: r_o = f^k_ω(r_i) = 2 quantize_k( tanh(r_i) / (2 max(|tanh(r_i)|)) + 1/2 ) − 1. (9)
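Putting Eqn. 9 together with the quantize_k STE, a compact sketch of the k-bit weight quantizer could look as follows; here the straight-through behaviour uses the common detach trick rather than a custom autograd Function, which is our implementation choice, not the paper's.

```python
# Illustrative sketch of the weight quantizer of Eqn. 9: squash weights with
# tanh, map into [0, 1] (max taken over the whole layer), quantize to k bits
# with a straight-through round, then map back to [-1, 1].
import torch

def quantize_k(r, k):
    n = 2 ** k - 1
    q = torch.round(r * n) / n
    return r + (q - r).detach()        # forward = q, backward = identity (STE)

def quantize_weights(w, k):
    t = torch.tanh(w)
    unit = t / (2 * t.abs().max()) + 0.5
    return 2 * quantize_k(unit, k) - 1

w = torch.randn(3, 3, requires_grad=True)
wq = quantize_weights(w, k=2)
wq.sum().backward()   # gradient flows as in Eqn. 10, with quantize_k treated as identity
print(wq)             # entries on a 4-level grid in [-1, 1]
```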
1606.06160#11
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
12
(8) Target sentence: After my dear mother passed away ten years ago now, I became Target word: lonely
(9) Context: “Again, he left that up to you. However, he was adamant in his desire that it remain a private ceremony. He asked me to make sure, for instance, that no information be given to the newspaper regarding his death, not even an obituary. Target sentence: I got the sense that he didn’t want anyone, aside from the three of us, to know that he’d even Target word: died
(10) Context: The battery on Logan’s radio must have been on the way out. So he told himself. There was no other explanation beyond Cygan and the staff at the White House having been overrun. Lizzie opened her eyes with a flutter. They had been on the icy road for an hour without incident. Target sentence: Jack was happy to do all of the Target word: driving
Figure 1: Examples of LAMBADA passages. Underlined words highlight when the target word (or its lemma) occurs in the context.
1606.06031#12
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
12
Backward: $\frac{\partial c}{\partial r_i} = \frac{\partial r_o}{\partial r_i}\frac{\partial c}{\partial r_o}$ (10)

Note here we use tanh to limit the value range of weights to [−1, 1] before quantizing to k-bit. By construction, $\frac{\tanh(r_i)}{2\max(|\tanh(r_i)|)} + \frac{1}{2}$ is a number in [0, 1], where the maximum is taken over all weights in that layer. quantize_k will then quantize this number to k-bit fixed-point ranging in [0, 1]. Finally an affine transform will bring the range of $f^k_\omega(r_i)$ to [−1, 1]. Note that when k = 1, Eqn. 9 is different from Eqn. 7, providing a different way of binarizing weights. Nevertheless, we find this difference insignificant in experiments.

2.4 LOW BITWIDTH QUANTIZATION OF ACTIVATIONS

Next we detail our approach to getting low bitwidth activations that are input to convolutions, which is of critical importance in replacing floating-point convolutions by less computation-intensive bit convolutions.
1606.06160#12
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
13
Figure 1: Examples of LAMBADA passages. Underlined words highlight when the target word (or its lemma) occurs in the context.

usefulness of general world knowledge and external resources for the task, in contrast to other kinds of texts like news data, Wikipedia text, or famous novels. The corpus, after duplicate removal and filtering out of potentially offensive material with a stop word list, contains 5,325 novels and 465 million words. We randomly divided the novels into equally-sized training and development+testing partitions. We built the LAMBADA dataset from the latter, with the idea that models tackling LAMBADA should be trained on raw text from the training partition, composed of 2662 novels and encompassing more than 200M words. Because novels are pre-assigned to one of the two partitions only, LAMBADA passages are self-contained and cannot be solved by exploiting the knowledge in the remainder of the novels, for example background information about the characters involved or the properties of the fictional world in a given novel. The same novel-based division method is used to further split LAMBADA data between development and testing.
1606.06031#13
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
13
In BNN and XNOR-Net, activations are binarized in the same way as weights. However, we fail to reproduce the results of XNOR-Net if we follow their methods of binarizing activations, and the binarizing approach in BNN is claimed by (Rastegari et al., 2016) to cause severe prediction accuracy degradation when applied on ImageNet models like AlexNet. Hence instead, we apply an STE on input activations r of each weight layer. Here we assume the output of the previous layer has passed through a bounded activation function h, which ensures r ∈ [0, 1]. In DoReFa-Net, quantization of activations r to k-bit is simply:

$f^k_\alpha(r) = \mathrm{quantize}_k(r).$ (11)

2.5 LOW BITWIDTH QUANTIZATION OF GRADIENTS

We have demonstrated deterministic quantization to produce low bitwidth weights and activations. However, we find stochastic quantization is necessary for low bitwidth gradients to be effective. This is in agreement with experiments of (Gupta et al., 2015) on 16-bit weights and 16-bit gradients.
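A minimal sketch of Eqn. 11 in NumPy, assuming the bounded activation h is a simple clip to [0, 1]; the concrete choice of h is not specified in this excerpt, so the clip and the quantize_k definition below are assumptions.

```python
import numpy as np

def quantize_k(x, k):
    # k-bit fixed-point quantization of values in [0, 1].
    n = float(2 ** k - 1)
    return np.round(x * n) / n

def quantize_activations(a_prev, k):
    r = np.clip(a_prev, 0.0, 1.0)   # bounded activation h, assumed to be clip(., 0, 1)
    return quantize_k(r, k)         # Eqn. 11: f_alpha^k(r) = quantize_k(r)

print(quantize_activations(np.array([0.1, 0.46, 0.9, 1.3]), k=2))
```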
1606.06160#13
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
14
To reduce time and cost of dataset collection, we filtered out passages that are relatively easy for standard language models, since such cases are likely to be guessable based on local context alone. We used a combination of four language models, chosen by availability and/or ease of training: a pre-trained recurrent neural network (RNN) (Mikolov et al., 2011) and three models trained on the Book Corpus (a standard 4-gram model, a RNN and a feed-forward model; see SM for details, and note that these are different from the models we evaluated on LAMBADA as described in Section 4 below). Any passage whose target word had probability ≥0.00175 according to any of the language models was excluded. A random sample of the remaining passages were then evaluated by human subjects through the CrowdFlower crowdsourcing service2 in three steps. For a given passage,
1. one human subject guessed the target word based on the whole passage (comprising the context and the target sentence); if the guess was right,
2. a second subject guessed the target word based on the whole passage; if that guess was also right,
3. more subjects tried to guess the target word based on the target sentence only, until the
2 http://www.crowdflower.com
1606.06031#14
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
14
To quantize gradients to low bitwidth, it is important to note that gradients are unbounded and may have a significantly larger value range than activations. Recall that in Eqn. 11, we can map the range of activations to [0, 1] by passing values through differentiable nonlinear functions. However, this kind of construction does not exist for gradients. Therefore we designed the following function for k-bit quantization of gradients:

$\tilde{f}^k_\gamma(dr) = 2\max_0(|dr|)\left[\mathrm{quantize}_k\!\left(\frac{dr}{2\max_0(|dr|)} + \frac{1}{2}\right) - \frac{1}{2}\right]$

Footnote 4: Here $\frac{\partial r_o}{\partial r_i}$ is well-defined because we already defined quantize_k as an STE.

Here $dr = \frac{\partial c}{\partial r}$ is the back-propagated gradient of the output r of some layer, and the maximum is taken over all axes of the gradient tensor dr except for the mini-batch axis (therefore each instance in a mini-batch will have its own scaling factor). The above function first applies an affine transform on the gradient, to map it into [0, 1], and then inverts the transform after quantization.
1606.06160#14
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
15
3. more subjects tried to guess the target word based on the target sentence only, until the word was guessed or the number of unsuccessful guesses reached 10; if no subject was able to guess the target word, the passage was added to the LAMBADA dataset.

The subjects in step 3 were allowed 3 guesses per sentence, to maximize the chances of catching cases where the target words were guessable from the sentence alone. Step 2 was added based on a pilot study that revealed that, while step 3 was enough to ensure that the data could not be guessed with the local context only, step 1 alone did not ensure that the data were easy given the discourse context (its output includes a mix of cases ranging from obvious to relatively difficult, guessed by an especially able or lucky step-1 subject). We made sure that it was not possible for the same subject to judge the same item in both passage and sentence conditions (details in SM). In the crowdsourcing pipeline, 84–86% of items were discarded at step 1, an additional 6–7% at step 2 and another 3–5% at step 3. Only about one in 25 input examples passed all the selection steps.
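The selection procedure can be summarized schematically as the language-model filter followed by the three crowdsourcing gates. The sketch below is only an illustration of that logic; the passage object and guessing functions are hypothetical stand-ins, not the authors' actual pipeline.

```python
def passes_lm_filter(lm_probs, threshold=0.00175):
    # Exclude a passage if any of the four language models assigns the target word
    # a probability at or above the threshold.
    return all(p < threshold for p in lm_probs)

def is_lambada_item(passage, guess_from_passage, guess_from_sentence):
    # Gates 1-2: two consecutive subjects must guess the word from the whole passage.
    for _ in range(2):
        if guess_from_passage(passage) != passage["target"]:
            return False
    # Gate 3: up to 10 subjects, 3 guesses each, must all fail with the target sentence only.
    for _ in range(10):
        if passage["target"] in guess_from_sentence(passage, n_guesses=3):
            return False
    return True
```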
1606.06031#15
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
15
To further compensate for the potential bias introduced by gradient quantization, we introduce an extra noise function $N(k) = \frac{\sigma}{2^k - 1}$ where $\sigma \sim \mathrm{Uniform}(-0.5, 0.5)$ (see footnote 5). The noise therefore has the same magnitude as the possible quantization error. We find the artificial noise to be critical for achieving good performance. Finally, the expression we will use to quantize gradients to k-bit numbers is as follows:

$f^k_\gamma(dr) = 2\max_0(|dr|)\left[\mathrm{quantize}_k\!\left(\frac{dr}{2\max_0(|dr|)} + \frac{1}{2} + N(k)\right) - \frac{1}{2}\right].$ (12)

The quantization of gradients is done on the backward pass only. Hence we apply the following STE on the output of each convolution layer:

Forward: $r_o = r_i$ (13)
Backward: $\frac{\partial c}{\partial r_i} = f^k_\gamma\!\left(\frac{\partial c}{\partial r_o}\right).$ (14)
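A NumPy sketch of the stochastic gradient quantizer of Eqn. 12. The names, the quantize_k definition and the small epsilon guard are ours; the per-instance maximum is taken over all non-batch axes, as described in the text.

```python
import numpy as np

def quantize_k(x, k):
    # k-bit fixed-point quantization of values in [0, 1].
    n = float(2 ** k - 1)
    return np.round(x * n) / n

def quantize_gradients(dr, k):
    # Eqn. 12: affine-map dr into [0, 1], add the noise N(k), quantize, then invert the map.
    axes = tuple(range(1, dr.ndim))                    # all axes except the mini-batch axis
    m = np.max(np.abs(dr), axis=axes, keepdims=True)   # per-instance scaling factor
    m = np.maximum(m, 1e-12)                           # guard against all-zero gradients (our addition)
    noise = np.random.uniform(-0.5, 0.5, dr.shape) / (2 ** k - 1)   # N(k)
    return 2.0 * m * (quantize_k(dr / (2.0 * m) + 0.5 + noise, k) - 0.5)

g = 0.01 * np.random.randn(8, 16, 4).astype(np.float32)
print(quantize_gradients(g, k=6).shape)   # (8, 16, 4)
```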
1606.06160#15
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
16
Subjects were paid $0.22 per page in steps 1 and 2 (with 10 passages per page) and $0.15 per page in step 3 (with 20 sentences per page). Overall, each item in the resulting dataset cost $1.24 on average. Alternative designs, such as having step 3 before step 2 or before step 1, were found to be more expensive. Cost considerations also precluded us from using more subjects at stage 1, which could in principle improve the quality of filtering at this step.
1606.06031#16
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
16
Forward: $r_o = r_i$ (13)
Backward: $\frac{\partial c}{\partial r_i} = f^k_\gamma\!\left(\frac{\partial c}{\partial r_o}\right).$ (14)

Algorithm 1: Training an L-layer DoReFa-Net with W-bit weights and A-bit activations using G-bit gradients. Weights, activations and gradients are quantized according to Eqn. 9, Eqn. 11, and Eqn. 12, respectively.
Require: a minibatch of inputs and targets $(a_0, a^*)$, previous weights $W$, learning rate $\eta$
Ensure: updated weights $W^{t+1}$
1606.06160#16
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
17
Note that the criteria for passage inclusion were very strict: We required two consecutive subjects to exactly match the missing word, and we made sure that no subject (out of ten) was able to provide it based on local context only, even when given 3 guesses. An alternative to this perfect-match approach would have been to include passages where broad-context subjects provided other plausible or synonymous continuations. However, it is very challenging, both practically and methodologically, to determine which answers other than the original fit the passage well, especially when the goal is to distinguish between items that are solvable in broad-discourse context and those where the local context is enough. Theoretically, substitutability in context could be tested with manual annotation by multiple additional raters, but this would not be financially or practically feasible for a dataset of this scale (human annotators received over 200,000 passages at stage 1). For this reason we went for the strict hit-or-miss approach, keeping only items that can be unambiguously determined by human subjects.

3.2 Dataset statistics
1606.06031#17
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
17
{1. Computing the parameter gradients:}
{1.1 Forward propagation:}
1: for k = 1 to L do
2:   $W^b_k \leftarrow f^W_\omega(W_k)$
3:   $\tilde{a}_k \leftarrow$ forward($a^b_{k-1}$, $W^b_k$)
4:   $a_k \leftarrow h(\tilde{a}_k)$
5:   if k < L then
6:     $a^b_k \leftarrow f^A_\alpha(a_k)$
7:   end if
8:   Optionally apply pooling
9: end for
{1.2 Backward propagation:}
Compute $g_{a_L} = \frac{\partial C}{\partial a_L}$ knowing $a_L$ and $a^*$.
10: for k = L to 1 do
11:   Back-propagate $g_{a_k}$ through activation function h
12:   $g^b_{a_k} \leftarrow f^G_\gamma(g_{a_k})$
13:   $g_{a_{k-1}} \leftarrow$ backward_input($g^b_{a_k}$, $W^b_k$)
14:   $g_{W^b_k} \leftarrow$ backward_weight($g^b_{a_k}$, $a^b_{k-1}$)
15:   Back-propagate gradients through pooling layer if there is one
16: end for
{2. Accumulating the parameters gradients:}
17: for k = 1 to L do
18:   $g_{W_k} = g_{W^b_k} \frac{\partial W^b_k}{\partial W_k}$
19:   $W^{t+1}_k \leftarrow$ Update($W_k$, $g_{W_k}$, $\eta$)
20: end for
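To make the data flow of Algorithm 1 concrete, here is a schematic NumPy walk-through of a single fully-connected layer with W = A = 2 and G = 6. The names are ours, h is assumed to be a clip to [0, 1], the straight-through identities are folded into the updates, and back-propagation through h and pooling is skipped, so this illustrates the step structure rather than a faithful implementation.

```python
import numpy as np

def quantize_k(x, k):
    n = float(2 ** k - 1)
    return np.round(x * n) / n

def quantize_weights(w, k):
    t = np.tanh(w)
    t = t / (2.0 * np.max(np.abs(t))) + 0.5
    return 2.0 * quantize_k(t, k) - 1.0

def quantize_grad(dr, k):
    m = np.maximum(np.max(np.abs(dr), axis=1, keepdims=True), 1e-12)
    noise = np.random.uniform(-0.5, 0.5, dr.shape) / (2 ** k - 1)
    return 2.0 * m * (quantize_k(dr / (2.0 * m) + 0.5 + noise, k) - 0.5)

W_bits, A_bits, G_bits = 2, 2, 6
x  = np.random.rand(32, 100)                  # a^b_{k-1}: inputs already quantized to [0, 1]
W  = np.random.randn(100, 10) * 0.1
Wb = quantize_weights(W, W_bits)              # step 2:  W^b_k <- f_omega^W(W_k)
a  = np.clip(x @ Wb, 0.0, 1.0)                # steps 3-4: forward + bounded activation h (assumed clip)
ab = quantize_k(a, A_bits)                    # step 6:  a^b_k <- f_alpha^A(a_k)

g_out = np.random.randn(32, 10) * 0.01        # dummy incoming gradient g_{a_k}
gb    = quantize_grad(g_out, G_bits)          # step 12: g^b_{a_k} <- f_gamma^G(g_{a_k})
g_in  = gb @ Wb.T                             # step 13: backward_input
g_Wb  = x.T @ gb                              # step 14: backward_weight
W    -= 0.01 * g_Wb                           # steps 18-19: straight-through, dW^b/dW treated as identity
print(g_in.shape, g_Wb.shape)
```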
1606.06160#17
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
18
3.2 Dataset statistics

The LAMBADA dataset consists of 10,022 passages, divided into 4,869 development and 5,153 test passages (extracted from 1,331 and 1,332 disjoint novels, respectively). The average passage consists of 4.6 sentences in the context plus 1 target sentence, for a total length of 75.4 tokens (dev) / 75 tokens (test). Examples of passages in the dataset are given in Figure 1.
1606.06031#18
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
18
Footnote 5: Note here we do not need to clip the value of N(k) as the two end points of a uniform distribution are almost surely never attained.

2.6 THE ALGORITHM FOR DOREFA-NET

We give a sample training algorithm of DoReFa-Net as Algorithm 1. W.l.o.g., the network is assumed to have a feed-forward linear topology, and details like batch normalization and pooling layers are omitted. Note that all the expensive operations forward, backward_input, backward_weight, in convolutional as well as fully-connected layers, are now operating on low bitwidth numbers. By construction, there is always an affine mapping between these low bitwidth numbers and fixed-point integers. As a result, all the expensive operations can be accelerated significantly by the fixed-point integer dot product kernel (Eqn. 3).

2.7 FIRST AND THE LAST LAYER
1606.06160#18
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
19
The training data for language models to be tested on LAMBADA include the full text of 2,662 novels (disjoint from those in dev+test), comprising 203 million words. Note that the training data consists of text from the same domain as the dev+test passages, in large amounts but not filtered in the same way. This is partially motivated by economic considerations (recall that each data point costs $1.24 on average), but, more importantly, it is justified by the intended use of LAMBADA as a tool to evaluate general-purpose models in terms of how they fare on broad-context understanding (just like our subjects could predict the missing words using their more general text understanding abilities), not as a resource to develop ad-hoc models only meant to predict the final word in the sort of passages encountered in LAMBADA. The development data can be used to fine-tune models to the specifics of the LAMBADA passages.

3.3 Dataset analysis
1606.06031#19
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
19
2.7 FIRST AND THE LAST LAYER

Among all layers in a DCNN, the first and the last layers appear to be different from the rest, as they are interfacing the input and output of the network. For the first layer, the input is often an image, which may contain 8-bit features. On the other hand, the output layer typically produces approximately one-hot vectors, which are close to bit vectors by definition. It is an interesting question whether these differences would cause the first and last layer to exhibit different behavior when converted to low bitwidth counterparts.

In the related work of (Han et al., 2015b), which converts network weights to sparse tensors, introducing the same ratio of zeros in the first convolutional layer is found to cause more prediction accuracy degradation than in the other convolutional layers. Based on this intuition, as well as the observation that the inputs to the first layer often contain only a few channels and constitute a small proportion of total computation complexity, we perform most of our experiments by not quantizing the weights of the first convolutional layer, unless noted otherwise. Nevertheless, the outputs of the first convolutional layer are quantized to low bitwidth as they would be used by the subsequent convolutional layer.
1606.06160#19
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
20
3.3 Dataset analysis

Our analysis of the LAMBADA data suggests that, in order for the target word to be predictable in a broad context only, it must be strongly cued in the broader discourse. Indeed, it is typical for LAMBADA items that the target word (or its lemma) occurs in the context. Figure 2(a) compares the LAMBADA items to a random 5000-item sample from the input data, that is, the passages that were presented to human subjects in the filtering phase (we sampled from all passages passing the automated filters described in Section 3.1 above, including those that made it to LAMBADA). The figure shows that when subjects guessed the word (only) in the broad context, often the word itself occurred in the context: More than 80% of LAMBADA passages include the target word in the context, while in the input data that was the case for less than 15% of the passages. To guess the right word, however, subjects must still put their linguistic and general cognitive skills to good use, as shown by the examples featuring the target word in the context reported in Figure 1.
1606.06031#20
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
20
Similarly, when the number of output classes is small, to stay away from potential degradation of prediction accuracy, we leave the last fully-connected layer intact unless noted otherwise. Nevertheless, the gradients back-propagated from the final FC layer are properly quantized. We will give the empirical evidence in Section 3.3.

2.8 REDUCING RUN-TIME MEMORY FOOTPRINT BY FUSING NONLINEAR FUNCTION AND ROUNDING

One of the motivations for creating low bitwidth neural networks is to save run-time memory footprint in inference. A naive implementation of Algorithm 1 would store activations $h(a_k)$ in full-precision numbers, consuming much memory during run-time. In particular, if h involves floating-point arithmetic, there will be a non-negligible amount of non-bitwise operations related to computations of $h(a_k)$.
1606.06160#20
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
21
Figure 2(b) shows that most target words in LAMBADA are proper nouns (48%), followed by common nouns (37%) and, at a distance, verbs (7.7%). In fact, proper nouns are hugely over-represented in LAMBADA, while the other categories are under-represented, compared to the POS distribution in the input. A variety of factors converges in making proper nouns easy for subjects in the LAMBADA task. In particular, when the context clearly demands a referential expression, the constraint that the blank be filled by a single word excludes other possibilities such as noun phrases with articles, and there are reasons to suspect that co-reference is easier than other discourse phenomena in our task (see below). However, although co-reference seems to play a big role, only 0.3% of target words are pronouns.
1606.06031#21
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
21
There are simple solutions to this problem. Notice that it is possible to fuse Step 3, Step 4 and Step 6 to avoid storing intermediate results in full precision. Apart from this, when h is monotonic, $f_\alpha \circ h$ is also monotonic, and the few possible values of $a^b_k$ correspond to several non-overlapping value ranges of $a_k$; hence we can implement the computation of $a^b_k = f_\alpha(h(a_k))$ by several comparisons between fixed-point numbers and avoid generating intermediate results. Similarly, it would also be desirable to fuse Step 11 ∼ Step 12, and Step 13 of the previous iteration, to avoid generation and storing of $g_{a_k}$. The situation would be more complex when there are intermediate pooling layers. Nevertheless, if the pooling layer is max-pooling, we can do the fusion as the quantize_k function commutes with the max function:

$\mathrm{quantize}_k(\max(a, b)) = \max(\mathrm{quantize}_k(a), \mathrm{quantize}_k(b)),$ (15)

hence again $g^b_{a_k}$ can be generated from $g_{a_k}$ by comparisons between fixed-point numbers.
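The commutation in Eqn. 15 follows from quantize_k being monotonic, and can be checked numerically with a small sketch of ours, assuming the usual rounding-based quantize_k:

```python
import numpy as np

def quantize_k(x, k):
    # Rounding-based k-bit quantizer on [0, 1] (assumed definition).
    n = float(2 ** k - 1)
    return np.round(x * n) / n

a, b = np.random.rand(1000), np.random.rand(1000)
for k in (1, 2, 4):
    lhs = quantize_k(np.maximum(a, b), k)
    rhs = np.maximum(quantize_k(a, k), quantize_k(b, k))
    assert np.allclose(lhs, rhs)   # Eqn. 15: quantize_k commutes with max
print("Eqn. 15 holds on random samples")
```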
1606.06160#21
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
22
Common nouns are still pretty frequent in LAMBADA, constituting over one third of the data. Qualitative analysis reveals a mixture of phenomena. Co-reference is again quite common (see Example (3) in Figure 1), sometimes as “partial” co-reference facilitated by bridging mechanisms (shutter–camera; Example (5)) or through the presence of a near synonym (‘lose the baby’–miscarriage; Example (1)). However, we also often find other phenomena, such as the inference of prototypical participants in an event. For instance, if the passage describes someone having breakfast together with typical food and beverages (see Example (7)), subjects can guess the target word coffee without it having been explicitly mentioned.
1606.06031#22
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06031
23
In contrast, verbs, adjectives, and adverbs are rare in LAMBADA. Many of those items can be guessed with local sentence context only, as shown in Figure 2(b), which also reports the POS distribution of the set of items that were guessed by subjects based on the target-sentence context only (step 3 in Section 3.1). Note a higher proportion of verbs, adjectives and adverbs in the latter set in Figure 2(b). While end-of-sentence context skews input distribution in favour of nouns, subject filtering does show a clear differential effect for nouns vs. other POSs. Manual inspection reveals that broad context is not necessary to guess items like

Figure 2: (a) Target word in or not in context; (b) Target word POS distribution in LAMBADA vs. data presented to human subjects (input) and items guessed with sentence context only (PN=proper noun, CN=common noun, V=verb, J=adjective, R=adverb, O=other); (c) Target word POS distribution of LAMBADA passages where the lemma of the target word is not in the context (categories as in (b)).
1606.06031#23
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
23
W | A | G | Training Complexity | Inference Complexity | Storage Relative Size | Model A Accuracy | Model B Accuracy | Model C Accuracy
1 | 1 | 2 | 3 | 1 | 1 | 0.934 | 0.924 | 0.910
1 | 1 | 4 | 5 | 1 | 1 | 0.968 | 0.961 | 0.916
1 | 1 | 8 | 9 | 1 | 1 | 0.970 | 0.962 | 0.902
1 | 1 | 32 | - | - | 1 | 0.971 | 0.963 | 0.921
1 | 2 | 2 | 4 | 2 | 1 | 0.909 | 0.930 | 0.900
1 | 2 | 3 | 5 | 2 | 1 | 0.968 | 0.964 | 0.934
1 | 2 | 4 | 6 | 2 | 1 | 0.975 | 0.969 | 0.939
2 | 1 | 2 | 6 | 2 | 2 | 0.927 | 0.928 | 0.909
2 | 1 | 4 | 10 | 2 | 2 | 0.969 | 0.957 | 0.904
1 | 2 | 8 | 10 | 2 | 1 | 0.975 | 0.971 | 0.946
1 | 2 | 32 | - | - | 1 | 0.976 | 0.970 | 0.950
1 | 3 | 3 | 6 | 3 | 1 | 0.968 | 0.964 | 0.946
1 | 3 | 4 | 7 | 3 | 1 | 0.974 | 0.974 | 0.959
1 | 3 | 6 | 9 | 3 | 1 | 0.977 | 0.974 | 0.949
1 | 4 | 2 | 6 | 4 | 1 | 0.815 | 0.898 | 0.911
1 | 4 | 4 | 8 | 4 | 1 | 0.975 | 0.974 | 0.962
1 | 4 | 8
1606.06160#23
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
24
frequent verbs (ask, answer, call), adjectives, and closed-class adverbs (now, too, well), as well as time-related adverbs (quickly, recently). In these cases, the sentence context suffices, so few of them end up in LAMBADA (although of course there are exceptions, such as Example (8), where the target word is an adjective). This contrasts with other types of open-class adverbs (e.g., innocently, confidently), which are generally hard to guess with both local and broad context. The low proportion of these kinds of adverbs and of verbs among guessed items in general suggests that tracking event-related phenomena (such as script-like sequences of events) is harder for subjects than co-referential phenomena, at least as framed in the LAMBADA task. Further research is needed to probe this hypothesis.
1606.06031#24
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06031
25
Furthermore, we observe that, while explicit mention in the preceding discourse context is critical for proper nouns, the other categories can often be guessed without having been explicitly introduced. This is shown in Figure 2(c), which depicts the POS distribution of LAMBADA items for which the lemma of the target word is not in the context (corresponding to about 16% of LAMBADA in total).3 Qualitative analysis of items with verbs and adjectives as targets suggests that the target word, although not present in the passage, is still strongly implied by the context. In about one third of the cases examined, the missing word is “almost there”. For instance, the passage contains a word with the same root but a different part of speech (e.g., death–died in Example (6)), or a synonymous expression (as mentioned above for “miscarriage”; we find the same phenomenon for verbs, e.g., ‘deprived you of water’–dehydrated).
1606.06031#25
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
25
3 EXPERIMENT RESULTS 3.1 CONFIGURATION SPACE EXPLORATION We explore the configuration space of combinations of bitwidths of weights, activations and gradients by experiments on the SVHN dataset. The SVHN dataset (Netzer et al., 2011) is a real-world digit recognition dataset consisting of photos of house numbers in Google Street View images. We consider the “cropped” format of the dataset: 32-by-32 colored images centered around a single character. There are 73257 digits for training, 26032 digits for testing, and 531131 less difficult samples which can be used as extra training data. The images are resized to 40x40 before being fed into the network. For convolutions in a DoReFa-Net, if we have W-bit weights, A-bit activations and G-bit gradients, the relative forward and backward computation complexity and relative storage size can be computed from Eqn. 3, and we list them in Table 1. As it would not be computationally efficient to use bit convolution kernels for convolutions between 32-bit numbers, and noting that previous works like BNN and XNOR-net have already compared bit convolution kernels with 32-bit convolution kernels, we omit the computation complexity comparison for the 32-bit control experiments.
1606.06160#25
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
26
In other cases, correct prediction requires more complex discourse inference, including guessing prototypical participants of a scene (as in the coffee example above), actions or events strongly suggested by the discourse (see Examples (1) and (10), where the mention of an icy road helps in predicting the target driving), or qualitative properties of participants or situations (see Example (8)). Of course, the same kind of discourse reasoning takes place when the target word is already present in the context (cf. Examples (3) and (4)). The presence of the word in context does not make the reasoning unnecessary (the task remains challenging), but facilitates the inference. As a final observation, intriguingly, the LAMBADA items contain (quoted) direct speech significantly more often than the input items overall (71% of LAMBADA items vs. 61% of items in the input sample), see, e.g., Examples (1) and (2). Further analysis is needed to investigate in what way more dialogic discourse might facilitate the prediction of the final target word.
1606.06031#26
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
26
We use the prediction accuracy of several CNN models on the SVHN dataset to evaluate the efficacy of configurations. Model A is a CNN that costs about 80 FLOPs for one 40x40 image, and it consists of seven convolutional layers and one fully-connected layer. Models B, C, and D are derived from Model A by reducing the number of channels of all seven convolutional layers by 50%, 75%, and 87.5%, respectively. The listed prediction accuracy is the maximum accuracy on the test set over 200 epochs. We use the ADAM (Kingma & Ba, 2014) learning rule with a learning rate of 0.001. In general, having low bitwidth weights, activations and gradients will cause degradation in prediction accuracy. But it should be noted that low bitwidth networks will have much reduced resource requirements.
1606.06160#26
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
27
3The apparent 1% of out-of-context proper nouns shown in Figure 2(c) is due to lemmatization mistakes (fictional characters for which the lemmatizer did not recognize a link between singular and plural forms, e.g., Wynn – Wynns). A manual check confirmed that all proper noun target words in LAMBADA are indeed also present in the context. In sum, LAMBADA contains a myriad of phenomena that, besides making it challenging from the text understanding perspective, are of great interest to the broad Computational Linguistics community. To return to Example (1), solving it requires a combination of linguistic skills ranging from (morpho)phonology (the plausible target word abortion is ruled out by the indefinite determiner a) through morphosyntax (the slot should be filled by a common singular noun) to pragmatics (understanding what the male participant is inferring from the female participant’s words), in addition to general reasoning skills. It is not surprising, thus, that LAMBADA is so challenging for current models, as we show next. # 4 Modeling experiments
1606.06031#27
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
27
In general, having low bitwidth weights, activations and gradients will cause degradation in prediction accuracy. But it should be noted that low bitwidth networks will have much reduced resource requirements. As balancing between multiple factors like training time, inference time, model size and accuracy is more a problem of practical trade-off, there will be no definite conclusion as to which combination of (W, A, G) one should choose. Nevertheless, we find in these experiments that weights, activations and gradients are progressively more sensitive to bitwidth, and using gradients with G ≤ 4 would significantly degrade prediction accuracy. Based on these observations, we take (W, A) = (1, 2) and G ≥ 4 as rational combinations and use them for most of our experiments on the ImageNet dataset.
1606.06160#27
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
28
Computational methods We tested several existing language models and baselines on LAMBADA. We implemented a simple RNN (Elman, 1990), a Long Short-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997), a traditional statistical N-Gram language model (Stolcke, 2002) with and without cache, and a Memory Network (Sukhbaatar et al., 2015). We remark that at least LSTM, Memory Network and, to a certain extent, the cache N-Gram model have, among their supposed benefits, the ability to take broader contexts into account. Note moreover that variants of RNNs and LSTMs are at the state of the art when tested on standard language modeling benchmarks (Mikolov, 2014). Our Memory Network implementation is similar to the one with which Hill et al. (2016) reached the best results on the CBT data set (see Section 2 above). While we could not re-implement the models that performed best on CNNDM (see again Section 2), our LSTM is architecturally similar to the Deep LSTM Reader of Hermann et al. (2015), which achieved respectable performance on that data set. Most importantly, we will show below that
1606.06031#28
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
28
Table 1 also shows that the relative number of channels significantly affects the prediction quality degradation resulting from bitwidth reduction. For example, there is no significant loss of prediction accuracy when going from the 32-bit model to DoReFa-Net for Model A, which is not the case for Model C. We conjecture that “more capable” models like those with more channels will be less sensitive to bitwidth differences. On the other hand, Table 1 also suggests a method to compensate for the prediction quality degradation, by increasing the bitwidth of activations for models with fewer channels, at the cost of increasing computation complexity for inference and training. However, the optimal bitwidth of gradients seems less related to the number of channels, and prediction quality saturates with 8-bit gradients most of the time. 3.2 IMAGENET We further evaluate DoReFa-Net on the ILSVRC12 (Deng et al., 2009) image classification dataset, which contains about 1.2 million high-resolution natural images for training, spanning 1000 categories of objects. The validation set contains 50k images. We report our single-crop evaluation result using top-1 accuracy. The images are resized to 224x224 before being fed into the network.
1606.06160#28
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
29
similar to the Deep LSTM Reader of Hermann et al. (2015), which achieved respectable performance on that data set. Most importantly, we will show below that most of our models reach impressive performance when tested on a more standard language modeling data set sourced from the same corpus used to build LAMBADA. This control set was constructed by randomly sampling 5K passages of the same shape and size as the ones used to build LAMBADA from the same test novels, but without filtering them in any way. Based on the control set results, to be discussed below, we can reasonably claim that the models we are testing on LAMBADA are very good at standard language modeling, and their low performance on the latter cannot be attributed to poor quality of the models.
1606.06031#29
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
29
The results are listed in Table 2. The baseline AlexNet model that scores 55.9% single-crop top-1 accuracy is a best-effort replication of the model in (Krizhevsky et al., 2012), with the second, fourth and fifth convolutions split into two parallel blocks. We replace the Local Contrast Renormalization layer with a Batch Normalization layer (Ioffe & Szegedy, 2015). We use the ADAM learning rule with a learning rate of 10^-4 at the start, and later decrease the learning rate to 10^-5 and subsequently 10^-6 when the accuracy curves become flat. From the table, it can be seen that increasing the bitwidth of activations from 1-bit to 2-bit and even to 4-bit, while still keeping 1-bit weights, leads to a significant accuracy increase, approaching the accuracy of the model where both weights and activations are 32-bit. Rounding gradients to 6-bit produces similar accuracies to 32-bit gradients, in the experiments “1-1-6” vs. “1-1-32”, “1-2-6” vs. “1-2-32”, and “1-3-6” vs. “1-3-32”.
1606.06160#29
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
30
In order to test for strong biases in the data, we constructed Sup-CBOW, a baseline model weakly tailored to the task at hand, consisting of a simple neural network that takes as input a bag-of-words representation of the passage and attempts to predict the final word. The input representation comes from adding pre-trained CBOW vectors (Mikolov et al., 2013) of the words in the passage.4 We also considered an unsupervised variant (Unsup-CBOW) where the target word is predicted by cosine similarity between the passage vector and the target word vector. Finally, we evaluated several variations of a random guessing baseline differing in terms of the word pool to sample from. The guessed word could be picked from: the full vocabulary, the words that appear in the current passage, or a random uppercased word from the passage. The latter baseline aims at exploiting the potential bias that proper names account for a consistent portion of the LAMBADA data (see Figure 2 above).
1606.06031#30
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
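The LAMBADA baselines chunk above (1606.06031#30) describes Unsup-CBOW (cosine similarity between the summed passage vector and each candidate word vector) and the random-capitalized-word heuristic in prose. The following is a hedged sketch of both, not the authors' implementation; `embeddings` is a hypothetical {word: vector} dictionary of pre-trained CBOW vectors, and the toy data is invented.

```python
# Hedged sketch of two baselines: Unsup-CBOW and the proper-noun heuristic.
import random
import numpy as np

def unsup_cbow_guess(passage_tokens, embeddings):
    """Return the vocabulary word whose vector is most similar to the summed passage vector."""
    vecs = [embeddings[t] for t in passage_tokens if t in embeddings]
    passage_vec = np.sum(vecs, axis=0)
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return max(embeddings, key=lambda w: cosine(passage_vec, embeddings[w]))

def random_capitalized_guess(passage_tokens):
    """Pick a random capitalized token from the passage (proper-noun heuristic)."""
    capitalized = [t for t in passage_tokens if t[:1].isupper()]
    return random.choice(capitalized) if capitalized else random.choice(passage_tokens)

# Toy usage with made-up 50-dimensional vectors:
emb = {w: np.random.randn(50) for w in ["the", "icy", "road", "Anna", "driving"]}
print(unsup_cbow_guess(["the", "icy", "road", "Anna"], emb))
print(random_capitalized_guess(["the", "icy", "road", "Anna"]))
```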
1606.06160
30
The rows with “initialized” mean that the model training has been initialized with a 32-bit model. It can be seen that there is a considerable gap between the best accuracy of a trained-from-scratch model and an initialized model. Closing this gap is left to future work. Nevertheless, it shows the potential for improving the accuracy of DoReFa-Net. 3.2.1 TRAINING CURVES Figure 1 shows the evolution of accuracy vs. epoch curves of DoReFa-Net. It can be seen that quantizing gradients to 6-bit does not cause the training curve to be significantly different from not quantizing gradients. However, using 4-bit gradients as in “1-2-4” leads to significant accuracy degradation. Table 2: Comparison of prediction accuracy for ImageNet with different choices of bitwidth in a DoReFa-Net. W, A, G are bitwidths of weights, activations and gradients respectively. Single-crop top-1 accuracy is given. Note the BNN result is reported by (Rastegari et al., 2016), not by the original authors. We do not quantize the first and last layers of AlexNet to low bitwidth, as BNN and XNOR-Net do.
1606.06160#30
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
31
Note that LAMBADA was designed to challenge language models with harder-than-average examples where broad context understanding is crucial. However, the average case should not be disregarded either, since we want language models to be able to handle both cases. For this reason, we trained the models entirely on unsupervised data and expect future work to follow similar principles. Concretely, we trained the models, as is standard practice, on predicting each upcoming word given the previous context, using the LAMBADA training data (see Section 3.2 above) as input corpus. The only exception to this procedure was Sup-CBOW where we extracted from the training novels similar-shaped passages to those in LAMBADA and trained the model on them (about 9M passages). Again, the goal of this model was only to test for potential biases in the data and not to provide a full account for the phenomena we are testing. We restricted the vocabulary of the models to the 60K most frequent words in the training set (covering 95% of the target words in the development set). The model hyperparameters were tuned on their accuracy in the development set. The same trained models were tested on the LAMBADA and the control sets. See SM for the tuning details.
1606.06031#31
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
31
W | A | G | Training Complexity | Inference Complexity | Storage Relative Size | AlexNet Accuracy
1 | 1 | 6 | 7 | 1 | 1 | 0.395
1 | 1 | 8 | 9 | 1 | 1 | 0.395
1 | 1 | 32 | - | 1 | 1 | 0.279 (BNN)
1 | 1 | 32 | - | 1 | 1 | 0.442 (XNOR-Net)
1 | 1 | 32 | - | 1 | 1 | 0.401
1 | 1 | 32 | - | 1 | 1 | 0.436 (initialized)
1 | 2 | 6 | 8 | 2 | 1 | 0.461
1 | 2 | 8 | 10 | 2 | 1 | 0.463
1 | 2 | 32 | - | 2 | 1 | 0.477
1 | 2 | 32 | - | 2 | 1 | 0.498 (initialized)
1 | 3 | 6 | 9 | 3 | 1 | 0.471
1 | 3 | 32 | - | 3 | 1 | 0.484
1 | 4 | 6 | - | 4 | 1 | 0.482
1 | 4 | 32 | - | 4 | 1 | 0.503
1 | 4 | 32 | - | 4 | 1 | 0.530 (initialized)
8 | 8 | 8 | - | - | 8 | 0.530
32 | 32 | 32 | - | - | 32 | 0.559
3.2.2 HISTOGRAM OF WEIGHTS, ACTIVATIONS AND GRADIENTS Figure 2 shows the histogram of gradients of layer “conv3” of the “1-2-6” AlexNet model at epoch 5 and 35. As the histogram remains mostly unchanged with epoch number, we omit the histograms of the other epochs for clarity.
1606.06160#31
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
32
Results Results of models and baselines are reported in Table 1. Note that the measure of interest
4 http://clic.cimec.unitn.it/composes/semantic-vectors.html
Data | Method | Acc. | Ppl. | Rank
LAMBADA (baselines) | Random vocabulary word | 0 | 60000 | 30026
LAMBADA (baselines) | Random word from passage | 1.6 | - | -
LAMBADA (baselines) | Random capitalized word from passage | 7.3 | - | -
LAMBADA (baselines) | Unsup-CBOW | 0 | 57040 | 16352
LAMBADA (baselines) | Sup-CBOW | 0 | 47587 | 4660
LAMBADA (models) | N-Gram | 0.1 | 3125 | 993
LAMBADA (models) | N-Gram w/cache | 0.1 | 768 | 87
LAMBADA (models) | RNN | 0 | 14725 | 7831
LAMBADA (models) | LSTM | 0 | 5357 | 324
LAMBADA (models) | Memory Network | 0 | 16318 | 846
Control (baselines) | Random vocabulary word | 0 | 60000 | 30453
Control (baselines) | Random word from passage | 0 | - | -
Control (baselines) | Random capitalized word from passage | 0 | - | -
Control (baselines) | Unsup-CBOW | 0 | 55190 | 12950
Control (baselines) | Sup-CBOW | 3.5 | 2344 | 259
Control (models) | N-Gram | 19.1 | 285 | 17
Control (models) | N-Gram w/cache | 19.1 | 270 | 18
Control (models) | RNN | 15.4 | 277 | 24
Control (models) | LSTM | 21.9 | 149 | 12
Control (models) | Memory Network | 8.5 | 566 | 46
Table 1: Results of computational methods. Accuracy is expressed in percentage.
1606.06031#32
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
32
Figure 3(a) shows the histogram of weights of layer “conv3” of the “1-2-6” AlexNet model at epoch 5, 15 and 35. Though the scale of the weights changes with epoch number, the distribution of weights is approximately symmetric. Figure 3(b) shows the histogram of activations of layer “conv3” of the “1-2-6” AlexNet model at epoch 5, 15 and 35. The distributions of activations are stable throughout the training process. 3.3 MAKING FIRST AND LAST LAYER LOW BITWIDTH To answer the question of whether the first and the last layer need to be treated specially when quantizing to low bitwidth, we use the same models A, B, C from Table 1 to find out if it is cost-effective to quantize the first and last layer to low bitwidth, and collect the results in Table 3. It can be seen that quantizing the first and the last layer indeed leads to significant accuracy degradation, and models with fewer channels suffer more. The degradation to some extent justifies the practices of BNN and XNOR-net of not quantizing these two layers.
1606.06160#32
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
33
Table 1: Results of computational methods. Accuracy is expressed in percentage. for LAMBADA is the average success of a model at predicting the target word, i.e., accuracy (unlike in standard language modeling, we know that the missing LAMBADA words can be precisely predicted by humans, so good models should be able to accomplish the same feat, rather than just assigning a high probability to them). However, as we observe a bottoming effect with accuracy, we also report perplexity and median rank of the correct word, to better compare the models. As anticipated above, and in line with what we expected, all our models have very good performance when called to perform a standard language modeling task on the control set. Indeed, 3 of the models (the N-Gram models and LSTM) can guess the right word in about 1/5 of the cases. The situation drastically changes if we look at the LAMBADA results, where all models are performing very badly. Indeed, no model is even able to compete with the simple heuristics of picking a random word from the passage, and, especially, a random capitalized word (easily a proper noun). At the same time, the low performance of
1606.06031#33
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
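The chunk above (1606.06031#33) explains the three measures reported in Table 1: accuracy (is the target word the model's top prediction?), perplexity of the target word, and the median rank the model assigns to it. The sketch below is a generic rendering of those measures over per-item probability distributions; the authors' actual evaluation script is not part of this excerpt, and the toy distributions are invented.

```python
# Hedged sketch of accuracy, target-word perplexity, and median rank.
import math
import statistics

def evaluate(predictions, targets):
    """predictions: list of {word: probability} dicts; targets: list of gold words."""
    hits, log_probs, ranks = [], [], []
    for dist, gold in zip(predictions, targets):
        ranked = sorted(dist, key=dist.get, reverse=True)
        hits.append(ranked[0] == gold)
        log_probs.append(math.log(dist.get(gold, 1e-12)))
        ranks.append(ranked.index(gold) + 1 if gold in dist else len(dist) + 1)
    accuracy = 100.0 * sum(hits) / len(hits)
    perplexity = math.exp(-sum(log_probs) / len(log_probs))
    return accuracy, perplexity, statistics.median(ranks)

preds = [{"driving": 0.4, "walking": 0.3, "snow": 0.3},
         {"abortion": 0.6, "miscarriage": 0.4}]
print(evaluate(preds, ["driving", "miscarriage"]))  # (50.0, 2.5, 1.5)
```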
1606.06160
33
Figure 1: Prediction accuracy of AlexNet variants on the Validation Set of ImageNet indexed by epoch number. “W-A-G” gives the specification of bitwidths of weights, activations and gradients. E.g., “1-2-4” stands for the case when weights are 1-bit, activations are 2-bit and gradients are 4-bit. The figure is best viewed in color. Figure 2: Histogram of gradients of layer “conv3” of the “1-2-6” AlexNet model at epoch 5 and 35. The y-axis is in logarithmic scale. # 4 DISCUSSION AND RELATED WORK By binarizing weights and activations, binarized neural networks like BNN and XNOR-Net have enabled acceleration of the forward pass of neural networks with bit convolution kernels. However, the backward pass of binarized networks still requires convolutions between floating-point gradients and weights, which cannot efficiently exploit bit convolution kernels as gradients are in general not low bitwidth numbers.
1606.06160#33
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
34
the latter heuristic in absolute terms (7% accuracy) shows that, despite the bias in favour of names in the passage, simply relying on this will not suffice to obtain good performance on LAMBADA, and models should rather pursue deeper forms of analysis of the broader context (the Sup-CBOW baseline, attempting to directly exploit the passage in a shallow way, performs very poorly). This confirms again that the difficulty of LAMBADA lies mainly in accounting for the information available in a broader context and not in the task of predicting the exact word missing. In comparative terms (and focusing on perplexity and rank, given the uniformly low accuracy results) we observe a stronger performance of the traditional N-Gram models over the neural-network-based ones, possibly pointing to the difficulty of tuning the latter properly. In particular, the best relative performance on LAMBADA is achieved by N-Gram w/cache, which takes passage statistics into account. While even this model is effectively unable to guess the right word, it achieves a respectable perplexity of 768.
1606.06031#34
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
34
Figure 3: (a) is the histogram of weights of layer “conv3” of the “1-2-6” AlexNet model at epoch 5, 15 and 35. There are two possible values at a specific epoch since the weights are scaled 1-bit values. (b) is the histogram of activations of layer “conv3” of the “1-2-6” AlexNet model at epoch 5, 15 and 35. There are four possible values at a specific epoch since the activations are 2-bit.
1606.06160#34
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
35
We recognize, of course, that the evaluation we performed is very preliminary, and it must only be taken as a proof-of-concept study of the difficulty of LAMBADA. Better results might be obtained simply by performing more extensive tuning, by adding more sophisticated mechanisms such as attention (Bahdanau et al., 2014), and so forth. Still, we would be surprised if minor modifications of the models we tested led to human-level performance on the task. We also note that, because of the way we have constructed LAMBADA, standard language models are bound to fail on it by design: one of our first filters (see Section 3.1) was to choose passages where a number of simple language models were failing to predict the upcoming word. However, future research should find ways around this inherent difficulty. After all, humans were still able to solve this task, so a model that claims to have good language understanding ability should be able to succeed on it as well. # 5 Conclusion
1606.06031#35
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
35
Table 3: Control experiments investigating the degradation caused by quantizing the first convolutional layer and the last FC layer to low bitwidth. The row with “(1, 2, 4)” stands for the baseline case of (W, A, G) = (1, 2, 4) and not quantizing the first and last layers. “+ first” means additionally quantizing the weights and gradients of the first convolutional layer (outputs of the first layer are already quantized in the base “(1,2,4)” scheme). “+ last” means quantizing the inputs, weights and gradients of the last FC layer. Note that outputs of the last layer do not need quantization.
Scheme | Model A Accuracy | Model B Accuracy | Model C Accuracy
(1, 2, 4) | 0.975 | 0.969 | 0.939
(1, 2, 4) + first | 0.972 | 0.963 | 0.932
(1, 2, 4) + last | 0.973 | 0.969 | 0.927
(1, 2, 4) + first + last | 0.971 | 0.961 | 0.928
1606.06160#35
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
36
# 5 Conclusion This paper introduced the new LAMBADA dataset, aimed at testing language models on their ability to take a broad discourse context into account when predicting a word. A number of linguistic phenomena make the target words in LAMBADA easy to guess by human subjects when they can look at the whole passages they come from, but nearly impossible if only the last sentence is considered. Our preliminary experiments suggest that even some cutting-edge neural network approaches that are in principle able to track long-distance effects are far from passing the LAMBADA challenge. We hope the computational community will be stimulated to develop novel language models that are genuinely capturing the non-local phenomena that LAMBADA reflects. To promote research in this direction, we plan to announce a public competition based on the LAMBADA data.5 Our own hunch is that, despite the initially disappointing results of the “vanilla” Memory Network we tested, the ability to store information in a longer-term memory will be a crucial component of successful models, coupled with the ability to perform some kind of reasoning about what’s
1606.06031#36
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
36
(Lin et al., 2015) makes a step further towards low bitwidth gradients by converting some multiplications to bit-shifts. However, the number of additions between high bitwidth numbers remains at the same order of magnitude as before, leading to reduced overall speedup. There is also another series of work (Seide et al., 2014) that quantizes gradients before communication in distributed computation settings. However, the work is more concerned with decreasing the amount of communication traffic, and does not deal with the bitwidth of gradients used in backpropagation. In particular, they use full precision gradients during the backward pass, and quantize the gradients only before sending them to other computation nodes. In contrast, we quantize gradients each time before they reach the selected convolution layers during the backward pass. To the best of our knowledge, our work is the first to reduce the bitwidth of gradients to 6-bit and lower, while still achieving comparable prediction accuracy without altering other aspects of the neural network model, such as increasing the number of channels, for models as large as AlexNet on the ImageNet dataset. # 5 CONCLUSION AND FUTURE WORK
1606.06160#36
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
37
5The development set of LAMBADA, along with the training corpus, can be downloaded at http://clic.cimec.unitn.it/lambada/. The test set will be made available at the time of the competition. stored in memory, in order to retrieve the right information from it. On a more general note, we believe that leveraging human performance on word prediction is a very promising strategy to construct benchmarks for computational models that are supposed to capture various aspects of human text understanding. The influence of broad context as explored by LAMBADA is only one example of this idea. # Acknowledgments We are grateful to Aurelie Herbelot, Tal Linzen, Nghia The Pham and, especially, Roberto Zamparelli for ideas and feedback. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 655577 (LOVe); ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES); NWO VIDI grant n. 276-89-008 (Asymmetry in Conversation). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used in our research. # References
1606.06031#37
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
37
# 5 CONCLUSION AND FUTURE WORK We have introduced DoReFa-Net, a method to train a convolutional neural network that has low bitwidth weights and activations using low bitwidth parameter gradients. We find that weights and activations can be deterministically quantized while gradients need to be stochastically quantized. As most convolutions during forward/backward passes are now taking low bitwidth weights and activations/gradients respectively, DoReFa-Net can use the bit convolution kernels to accelerate both the training and inference process. Our experiments on the SVHN and ImageNet datasets demonstrate that DoReFa-Net can achieve prediction accuracy comparable to its 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights and 2-bit activations can be trained from scratch using 6-bit gradients to get 46.1% top-1 accuracy on the ImageNet validation set. As future work, it would be interesting to investigate using FPGAs to train DoReFa-Net, as the O(B^2) resource requirement of computation units for B-bit arithmetic on FPGA strongly favors low bitwidth convolutions. # REFERENCES
1606.06160#37
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
38
# References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In ICLR. Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of EMNLP, pages 632–642, Lisbon, Portugal. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179–211. Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of NIPS, Montreal, Canada. Published online: https://papers.nips.cc/book/advances-in-neural-information-processing-systems-28-2015.
1606.06031#38
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
38
# REFERENCES Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. Bengio, Yoshua, Léonard, Nicholas, and Courville, Aaron. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. Chen, Tianshi, Du, Zidong, Sun, Ninghui, Wang, Jia, Wu, Chengyong, Chen, Yunji, and Temam, Olivier. DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning. In ACM SIGPLAN Notices, volume 49, pp. 269–284. ACM, 2014a.
1606.06160#38
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
39
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The Goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of ICLR Conference Track, San Juan, Puerto Rico. Published online: http://www.iclr.cc/doku.php?id=iclr2016:main. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2015. Document context language models. http://arxiv.org/abs/1511.03962. Tomas Mikolov, Stefan Kombrink, Anoop Deoras, Lukar Burget, and Jan Honza Cernocky. 2011. RNNLM - recurrent neural network language modeling toolkit. In Proceedings of ASRU. IEEE Automatic Speech Recognition and Understanding Workshop. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. http://arxiv.org/abs/1301.3781/.
1606.06031#39
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
39
Chen, Yunji, Luo, Tao, Liu, Shaoli, Zhang, Shijin, He, Liqiang, Wang, Jia, Li, Ling, Chen, Tianshi, Xu, Zhiwei, Sun, Ninghui, et al. DaDianNao: A machine-learning supercomputer. In Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture, pp. 609–622. IEEE Computer Society, 2014b. Courbariaux, Matthieu and Bengio, Yoshua. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016. Courbariaux, Matthieu, Bengio, Yoshua, and David, Jean-Pierre. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014. Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.
1606.06160#39
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
40
Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato. 2015. Learning longer memory in recurrent neural networks. In Proceedings of ICLR Workshop Track, San Diego, CA. Published online: http://www.iclr.cc/doku.php?id=iclr2015:main. Tomas Mikolov. 2014. Using neural networks for modelling and representing natural languages. Slides presented at COLING, online at http://www.coling-2014.org/COLING4\Tutorial-fix\-\TomasMikolov.pdf. Matthew Richardson, Christopher Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP, pages 193–203, Seattle, WA. Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In Proceedings of ICLR Conference Track, San Juan, Puerto Rico. Published online: http://www.iclr.cc/doku.php?id=iclr2016:main.
1606.06031#40
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
40
Farabet, Clément, LeCun, Yann, Kavukcuoglu, Koray, Culurciello, Eugenio, Martini, Berin, Akselrod, Polina, and Talay, Selcuk. Large-scale FPGA-based convolutional networks. Scaling up Machine Learning: Parallel and Distributed Approaches, pp. 399–419, 2011. Gong, Yunchao, Liu, Liu, Yang, Ming, and Bourdev, Lubomir. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014. Gupta, Suyog, Agrawal, Ankur, Gopalakrishnan, Kailash, and Narayanan, Pritish. Deep learning with limited numerical precision. arXiv preprint arXiv:1502.02551, 2015. Han, Song, Mao, Huizi, and Dally, William J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
1606.06160#40
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
41
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of NAACL, pages 196–205, Denver, CO. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In INTERSPEECH, volume 2002, page 2002. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. http://arxiv.org/abs/1503.08895. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proceedings of the ICML Deep Learning Workshop, Lille, France. Published online: https://sites.google.com/site/deeplearning2015/accepted-papers. Tian Wang and Kyunghyun Cho. 2015. Larger-context language modelling. http://arxiv.org/abs/1511.03729. Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. http://arxiv.org/abs/1502.05698.
1606.06031#41
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
41
Han, Song, Pool, Jeff, Tran, John, and Dally, William. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143, 2015b. Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E., Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N., et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012a. Hinton, Geoffrey, Srivastava, Nitish, and Swersky, Kevin. Neural networks for machine learning. Coursera, video lectures, 264, 2012b. Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. Kim, Minje and Smaragdis, Paris. Bitwise neural networks. arXiv preprint arXiv:1601.06071, 2016.
1606.06160#41
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06031
42
Wenpeng Yin and Hinrich Schütze. 2015. MultiGranCNN: An architecture for general matching of text chunks on multiple levels of granularity. In Proceedings of ACL, pages 63–73, Beijing, China. Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV 2015, pages 19–27. Geoffrey Zweig and Christopher Burges. 2011. The Microsoft Research sentence completion challenge. Technical Report MSR-TR-2011-129, Microsoft Research.
1606.06031#42
The LAMBADA dataset: Word prediction requiring a broad discourse context
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. We show that LAMBADA exemplifies a wide range of linguistic phenomena, and that none of several state-of-the-art language models reaches accuracy above 1% on this novel benchmark. We thus propose LAMBADA as a challenging test set, meant to encourage the development of new models capable of genuine understanding of broad context in natural language text.
http://arxiv.org/pdf/1606.06031
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
cs.CL, cs.AI, cs.LG
10 pages, Accepted as a long paper for ACL 2016
null
cs.CL
20160620
20160620
[]
1606.06160
42
Kim, Minje and Smaragdis, Paris. Bitwise neural networks. arXiv preprint arXiv:1601.06071, 2016. Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012. Li, Fengfu and Liu, Bin. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016. Lin, Zhouhan, Courbariaux, Matthieu, Memisevic, Roland, and Bengio, Yoshua. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015. Merolla, Paul, Appuswamy, Rathinakumar, Arthur, John, Esser, Steve K., and Modha, Dharmendra. Deep neural networks are robust to weight binarization and other non-linear distortions. arXiv preprint arXiv:1606.01981, 2016.
1606.06160#42
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06160
43
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, pp. 5. Granada, Spain, 2011. Pham, Phi-Hung, Jelaca, Darko, Farabet, Clement, Martini, Berin, LeCun, Yann, and Culurciello, Eugenio. NeuFlow: Dataflow vision processing system-on-a-chip. In Circuits and Systems (MWSCAS), 2012 IEEE 55th International Midwest Symposium on, pp. 1044–1047. IEEE, 2012. Rastegari, Mohammad, Ordonez, Vicente, Redmon, Joseph, and Farhadi, Ali. XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.
1606.06160#43
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.06160
44
Seide, Frank, Fu, Hao, Droppo, Jasha, Li, Gang, and Yu, Dong. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In INTERSPEECH, pp. 1058–1062, 2014. Vanhoucke, Vincent, Senior, Andrew, and Mao, Mark Z. Improving the speed of neural networks on CPUs. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, volume 1, 2011. Wu, Jiaxiang, Leng, Cong, Wang, Yuhang, Hu, Qinghao, and Cheng, Jian. Quantized convolutional neural networks for mobile devices. arXiv preprint arXiv:1512.06473, 2015.
1606.06160#44
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
http://arxiv.org/pdf/1606.06160
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
cs.NE, cs.LG
null
null
cs.NE
20160620
20180202
[ { "id": "1502.03167" }, { "id": "1605.04711" }, { "id": "1510.03009" }, { "id": "1606.01981" }, { "id": "1602.02830" }, { "id": "1603.05279" }, { "id": "1502.02551" }, { "id": "1601.06071" }, { "id": "1512.06473" }, { "id": "1510.00149" } ]
1606.05250
0
# SQuAD: 100,000+ Questions for Machine Comprehension of Text Pranav Rajpurkar and Jian Zhang and Konstantin Lopyrev and Percy Liang {pranavsr,zjian,klopyrev,pliang}@cs.stanford.edu Computer Science Department Stanford University # Abstract We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com.
1606.05250#0
SQuAD: 100,000+ Questions for Machine Comprehension of Text
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com
http://arxiv.org/pdf/1606.05250
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
cs.CL
To appear in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)
null
cs.CL
20160616
20161011
[]
1606.05378
0
# Simpler Context-Dependent Logical Forms via Model Projections Reginald Long Stanford University [email protected] Panupong Pasupat Stanford University [email protected] Percy Liang Stanford University [email protected] # Abstract We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collected three new context-dependent semantic parsing datasets, and develop a new left-to-right parser. [Figure 1 content: Context (beaker state image); Text: "Pour the last green beaker into beaker 2. Then into the first beaker. Mix it."; Denotation (resulting beaker state image).]
1606.05378#0
Simpler Context-Dependent Logical Forms via Model Projections
We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collected three new context-dependent semantic parsing datasets, and develop a new left-to-right parser.
http://arxiv.org/pdf/1606.05378
Reginald Long, Panupong Pasupat, Percy Liang
cs.CL, I.2.6; I.2.7
10 pages, ACL 2016
null
cs.CL
20160616
20160616
[]
1606.05250
1
In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail... Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Short, intense periods of rain in scattered locations are called "showers". What causes precipitation to fall? gravity What is another main form of precipitation besides drizzle, rain, snow, sleet and hail? graupel Where do water droplets collide with ice crystals to form precipitation? within a cloud # 1 Introduction Figure 1: Question-answer pairs for a sample passage in the SQuAD dataset. Each of the answers is a segment of text from the passage.
1606.05250#1
SQuAD: 100,000+ Questions for Machine Comprehension of Text
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com
http://arxiv.org/pdf/1606.05250
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
cs.CL
To appear in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)
null
cs.CL
20160616
20161011
[]
1606.05378
1
[Figure 1 content: Context (beaker state image); Text: "Pour the last green beaker into beaker 2. Then into the first beaker. Mix it."; Denotation (resulting beaker state image).] Figure 1: Our task is to learn to map a piece of text in some context to a denotation. An example from the ALCHEMY dataset is shown. In this paper, we ask: what intermediate logical form is suitable for modeling this mapping? In this paper, we propose projecting a full semantic parsing model onto simpler models over equivalence classes of logical form derivations. As illustrated in Figure 2, we consider the following sequence of models: # Introduction
1606.05378#1
Simpler Context-Dependent Logical Forms via Model Projections
We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collected three new context-dependent semantic parsing datasets, and develop a new left-to-right parser.
http://arxiv.org/pdf/1606.05378
Reginald Long, Panupong Pasupat, Percy Liang
cs.CL, I.2.6; I.2.7
10 pages, ACL 2016
null
cs.CL
20160616
20160616
[]
1606.05250
2
# 1 Introduction Figure 1: Question-answer pairs for a sample passage in the SQuAD dataset. Each of the answers is a segment of text from the passage. Reading Comprehension (RC), or the ability to read text and then answer questions about it, is a challenging task for machines, requiring both understanding of natural language and knowledge about the world. Consider the question "what causes precipitation to fall?" posed on the passage in Figure 1. In order to answer the question, one might first locate the relevant part of the passage "precipitation ... falls under gravity", then reason that "under" refers to a cause (not location), and thus determine the correct answer: "gravity".
1606.05250#2
SQuAD: 100,000+ Questions for Machine Comprehension of Text
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com
http://arxiv.org/pdf/1606.05250
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
cs.CL
To appear in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)
null
cs.CL
20160616
20161011
[]
1606.05378
2
# Introduction Suppose we are only told that a piece of text (a command) in some context (state of the world) has some denotation (the effect of the command)—see Figure 1 for an example. How can we build a system to learn from examples like these with no initial knowledge about what any of the words mean? We start with the classic paradigm of training semantic parsers that map utterances to logical forms, which are executed to produce the denotation (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Zettlemoyer and Collins, 2009; Kwiatkowski et al., 2010). More recent work learns directly from denotations (Clarke et al., 2010; Liang, 2013; Berant et al., 2013; Artzi and Zettlemoyer, 2013), but in this setting, a constant struggle is to contain the exponential explosion of possible logical forms. With no initial lexicon and longer context-dependent texts, our situation is exacerbated.
1606.05378#2
Simpler Context-Dependent Logical Forms via Model Projections
We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collected three new context-dependent semantic parsing datasets, and develop a new left-to-right parser.
http://arxiv.org/pdf/1606.05378
Reginald Long, Panupong Pasupat, Percy Liang
cs.CL, I.2.6; I.2.7
10 pages, ACL 2016
null
cs.CL
20160616
20160616
[]
1606.05250
3
How can we get a machine to make progress on the challenging task of reading comprehension? Historically, large, realistic datasets have played a critical role for driving fields forward—famous examples include ImageNet for object recognition (Deng et al., 2009) and the Penn Treebank for syntactic parsing (Marcus et al., 1993). Existing datasets for RC have one of two shortcomings: (i) those that are high in quality (Richardson et al., 2013; Berant et al., 2014) are too small for training modern data-intensive models, while (ii) those that are large (Hermann et al., 2015; Hill et al., 2015) are semi-synthetic and do not share the same characteristics as explicit reading comprehension questions. To address the need for a large and high-quality reading comprehension dataset, we present the Stanford Question Answering Dataset v1.0 (SQuAD), freely available at https://stanford-qa.com, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage. SQuAD contains 107,785 question-answer pairs on 536 articles, and is almost two orders of magnitude larger than previous manually labeled RC datasets such as MCTest (Richardson et al., 2013).
1606.05250#3
SQuAD: 100,000+ Questions for Machine Comprehension of Text
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com
http://arxiv.org/pdf/1606.05250
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
cs.CL
To appear in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)
null
cs.CL
20160616
20161011
[]
1606.05378
3
• Model A: our full model that derives logical forms (e.g., in Figure 1, the last utterance maps to mix(args[1][1])) compositionally from the text so that spans of the utterance (e.g., "it") align to parts of the logical form (e.g., args[1][1], which retrieves an argument from a previous logical form). This is based on standard semantic parsing (e.g., Zettlemoyer and Collins (2005)). • Model B: collapse all derivations with the same logical form; we map utterances to full logical forms, but without an alignment between the utterance and logical forms. This "floating" approach was used in Pasupat and Liang (2015) and Wang et al. (2015). • Model C: further collapse all logical forms whose top-level arguments have the same denotation. In other words, we map utterances
1606.05378#3
Simpler Context-Dependent Logical Forms via Model Projections
We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collected three new context-dependent semantic parsing datasets, and develop a new left-to-right parser.
http://arxiv.org/pdf/1606.05378
Reginald Long, Panupong Pasupat, Percy Liang
cs.CL, I.2.6; I.2.7
10 pages, ACL 2016
null
cs.CL
20160616
20160616
[]
1606.05250
4
In contrast to prior datasets, SQuAD does not provide a list of answer choices for each question. Rather, systems must select the answer from all possible spans in the passage, thus needing to cope with a fairly large number of candidates. While questions with span-based answers are more constrained than the more interpretative questions found in more advanced standardized tests, we still find a rich diversity of questions and answer types in SQuAD. We develop automatic techniques based on distances in dependency trees to quantify this diversity and stratify the questions by difficulty. The span constraint also comes with the important benefit that span-based answers are easier to evaluate than free-form answers.
1606.05250#4
SQuAD: 100,000+ Questions for Machine Comprehension of Text
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com
http://arxiv.org/pdf/1606.05250
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
cs.CL
To appear in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)
null
cs.CL
20160616
20161011
[]
1606.05378
4
• Model C: further collapse all logical forms whose top-level arguments have the same denotation. In other words, we map utterances [Figure 2 content: derivation diagrams for Model A (multiple anchored derivations of mix(args[1][1]) and mix(pos(2)) over "Mix it"), Model B (the two logical forms mix(args[1][1]) and mix(pos(2)) without alignments), and Model C (the single flat form mix(beaker2)).] Figure 2: Derivations generated for the last utterance in Figure 1. All derivations above execute to mix(beaker2). Model A generates anchored logical forms (derivations) where words are aligned to predicates, which leads to multiple derivations with the same logical form. Model B discards these alignments, and Model C collapses the arguments of the logical forms to denotations. to flat logical forms (e.g., mix(beaker2)), where the arguments of the top-level predicate are objects in the world. This model is in the spirit of Yao et al. (2014) and Bordes et al. (2014), who directly predicted concrete paths in a knowledge graph for question answering.
1606.05378#4
Simpler Context-Dependent Logical Forms via Model Projections
We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collected three new context-dependent semantic parsing datasets, and develop a new left-to-right parser.
http://arxiv.org/pdf/1606.05378
Reginald Long, Panupong Pasupat, Percy Liang
cs.CL, I.2.6; I.2.7
10 pages, ACL 2016
null
cs.CL
20160616
20160616
[]
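To make the Model C projection described in the chunks above concrete, here is a toy executor for flat logical forms such as mix(beaker2) over a beaker world in the style of the ALCHEMY examples. The world encoding, the predicate semantics (e.g., mixing turning contents brown), and all helper names are assumptions for illustration, not the authors' implementation.

```python
from copy import deepcopy

def execute(world, predicate, *args):
    """Apply a flat logical form to a world state (list of beakers).

    Each beaker is a list of color strings, bottom first. Beakers are
    referred to by 1-based index, matching names like `beaker2`.
    """
    world = deepcopy(world)
    if predicate == "mix":
        (i,) = args
        beaker = world[i - 1]
        world[i - 1] = ["brown"] * len(beaker)   # assumed: mixing turns contents brown
    elif predicate == "pour":
        src, dst = args
        world[dst - 1].extend(world[src - 1])
        world[src - 1] = []
    else:
        raise ValueError("unknown predicate: %s" % predicate)
    return world

# Example: pour beaker 3 into beaker 2, then mix beaker 2.
state = [["green"], ["orange"], ["green"]]
state = execute(state, "pour", 3, 2)
state = execute(state, "mix", 2)
assert state == [["green"], ["brown", "brown"], []]
```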
1606.05250
5
To assess the difficulty of SQuAD, we implemented a logistic regression model with a range of features. We find that lexicalized and dependency tree path features are important to the performance of the model. We also find that the model performance worsens with increasing complexity of (i) answer types and (ii) syntactic divergence between the question and the sentence containing the answer; interestingly, there is no such degradation for humans. Our best model achieves an F1 score of 51.0%,1 which is much better than the sliding window baseline (20%). Over the last four months (since June 2016), we have witnessed significant improvements from more sophisticated neural network-based models. For example, Wang and Jiang (2016) obtained 70.3% F1 on SQuAD v1.1 (results on v1.0 are similar). These results are still well behind human performance, which is 86.8% F1 based on inter-annotator agreement. This suggests that there is plenty of room for advancement in modeling and learning on the SQuAD dataset. 1All experimental results in this paper are on SQuAD v1.0.
1606.05250#5
SQuAD: 100,000+ Questions for Machine Comprehension of Text
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com
http://arxiv.org/pdf/1606.05250
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
cs.CL
To appear in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)
null
cs.CL
20160616
20161011
[]
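The chunk above reports F1 scores for span answers. The following is a minimal sketch of token-overlap F1 between a predicted span and a gold span; it simplifies the official SQuAD normalization (articles, punctuation, casing) and is illustrative only, not the official evaluation script.

```python
from collections import Counter

def f1_score(prediction, ground_truth):
    """Token-overlap F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Example: exact match scores 1.0, overlapping spans get partial credit.
assert f1_score("gravity", "gravity") == 1.0
assert 0.0 < f1_score("within a cloud", "in a cloud") < 1.0
```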
1606.05378
5
Model A excels at credit assignment: the latent derivation explains how parts of the logical form are triggered by parts of the utterance. The price is an unmanageably large search space, given that we do not have a seed lexicon. At the other end, Model C only considers a small set of logical forms, but the mapping from text to the correct logical form is more complex and harder to model. We collected three new context-dependent semantic parsing datasets using Amazon Mechanical Turk: ALCHEMY (Figure 1), SCENE (Figure 3), and TANGRAMS (Figure 4). Along the way, we develop a new parser which processes utterances left-to-right but can construct logical forms without an explicit alignment. Our empirical findings are as follows: First, Model C is surprisingly effective, mostly surpassing the other two given bounded computational resources (a fixed beam size). Second, on a synthetic dataset, with infinite beam, Model A outperforms the other two models. Third, we can bootstrap up to Model A from the projected models with finite beam. [Figure 3 content: Context (stage image); Text: "A man in a red shirt and orange hat leaves to the right, leaving behind a man in a blue shirt in the middle. He takes a step to the left."; Denotation (resulting stage image).]
1606.05378#5
Simpler Context-Dependent Logical Forms via Model Projections
We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collected three new context-dependent semantic parsing datasets, and develop a new left-to-right parser.
http://arxiv.org/pdf/1606.05378
Reginald Long, Panupong Pasupat, Percy Liang
cs.CL, I.2.6; I.2.7
10 pages, ACL 2016
null
cs.CL
20160616
20160616
[]
1606.05250
6
1All experimental results in this paper are on SQuAD v1.0.
Dataset | Question source | Formulation | Size
SQuAD | crowdsourced | RC, spans in passage | 100K
MCTest (Richardson et al., 2013) | crowdsourced | RC, multiple choice | 2640
Algebra (Kushman et al., 2014) | standardized tests | computation | 514
Science (Clark and Etzioni, 2016) | standardized tests | reasoning, multiple choice | 855
WikiQA (Yang et al., 2015) | query logs | IR, sentence selection | 1479
TREC-QA (Voorhees and Tice, 2000) | query logs + human editor | IR, free form | 3047
CNN/Daily Mail (Hermann et al., 2015) | summary + cloze | RC, fill in single entity | 1.4M
CBT (Hill et al., 2015) | cloze | RC, fill in single word | 688K
Table 1: A survey of several reading comprehension and question answering datasets. SQuAD is much larger than all datasets except the semi-synthetic cloze-style datasets, and it is similar to TREC-QA in the open-endedness of the answers. # 2 Existing Datasets We begin with a survey of existing reading comprehension and question answering (QA) datasets, highlighting a variety of task formulation and creation strategies (see Table 1 for an overview).
1606.05250#6
SQuAD: 100,000+ Questions for Machine Comprehension of Text
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com
http://arxiv.org/pdf/1606.05250
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
cs.CL
To appear in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)
null
cs.CL
20160616
20161011
[]
1606.05378
6
Figure 3: SCENE dataset: Each person has a shirt of some color and a hat of some color. They enter, leave, move around on a stage, and trade hats. [Figure 4 content: Context (tangram figures image); Text: "Delete the second figure. Bring it back as the first figure."; Denotation (resulting figures image).] Figure 4: TANGRAMS dataset: One can add figures, remove figures, and swap the position of figures. All the figures slide to the left. # 2 Task In this section, we formalize the task and describe the new datasets we created for the task. # 2.1 Setup
1606.05378#6
Simpler Context-Dependent Logical Forms via Model Projections
We consider the task of learning a context-dependent mapping from utterances to denotations. With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances. To cope with this challenge, we perform successive projections of the full model onto simpler models that operate over equivalence classes of logical forms. Though less expressive, we find that these simpler models are much faster and can be surprisingly effective. Moreover, they can be used to bootstrap the full model. Finally, we collected three new context-dependent semantic parsing datasets, and develop a new left-to-right parser.
http://arxiv.org/pdf/1606.05378
Reginald Long, Panupong Pasupat, Percy Liang
cs.CL, I.2.6; I.2.7
10 pages, ACL 2016
null
cs.CL
20160616
20160616
[]