id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---|
1612.03651#3 | FastText.zip: Compressing text classification models | This paper specifically addresses the compromise between classification accuracy and the model size. We extend our previous work implemented in the fastText library1. It is based on n-gram features, dimensionality reduction, and a fast approximation of the softmax classifier (Joulin et al., 2016). We show that a few key ingredients, namely feature pruning, quantization, hashing, and re-training, allow us to produce text classification models with tiny size, often less than 100kB when trained on several popular datasets, without noticeably sacrificing accuracy or speed. We plan to publish the code and scripts required to reproduce our results as an extension of the fastText library, thereby providing strong reproducible baselines for text classifiers that optimize the compromise between the model size and accuracy. We hope that this will help the engineering community to improve existing applications by using more efficient models. This paper is organized as follows. Section 2 introduces related work, Section 3 describes our text classification model and explains how we drastically reduce the model size. Section 4 shows the effectiveness of our approach in experiments on multiple text classification benchmarks.
# 1https://github.com/facebookresearch/fastText
2 RELATED WORK
Models for text classification. Text classification is a problem that has its roots in many applications such as web search, information retrieval and document classification (Deerwester et al., 1990; Pang & Lee, 2008). Linear classifiers often obtain state-of-the-art performance while being scalable (Agarwal et al., 2014; Joachims, 1998; Joulin et al., 2016; McCallum & Nigam, 1998). They are particularly interesting when associated with the right features (Wang & Manning, 2012). They usually require storing embeddings for words and n-grams, which makes them memory inefficient.
Compression of language models. Our work is related to compression of statistical language models. Classical approaches include feature pruning based on entropy (Stolcke, 2000) and quantization. | 1612.03651#2 | 1612.03651#4 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#4 | FastText.zip: Compressing text classification models | Pruning aims to keep only the most important n-grams in the model, leaving out those with probability lower than a specified threshold. Further, the individual n-grams can be compressed by quantizing the probability value, and by storing the n-gram itself more efficiently than as a sequence of characters. Various strategies have been developed, for example using tree structures or hash functions, and are discussed in (Talbot & Brants, 2008).
Compression for similarity estimation and search. There is a large body of literature on how to compress a set of vectors into compact codes, such that the comparison of two codes approximates a target similarity in the original space. The typical use-case of these methods considers an indexed dataset of compressed vectors, and a query for which we want to find the nearest neighbors in the indexed set. One of the most popular is Locality-Sensitive Hashing (LSH) by Charikar (2002), which is a binarization technique based on random projections that approximates the cosine similarity between two vectors through a monotonous function of the Hamming distance between the two corresponding binary codes. In our paper, LSH refers to this binarization strategy2. Many subsequent works have improved this initial binarization technique, such as spectral hashing (Weiss et al., 2009), or Iterative Quantization (ITQ) (Gong & Lazebnik, 2011), which learns a rotation matrix minimizing the quantization loss of the binarization. We refer the reader to two recent surveys by Wang et al. (2014) and Wang et al. (2015) for an overview of the binary hashing literature.
Beyond these binarization strategies, more general quantization techniques derived from Jegou et al. (2011) offer better trade-offs between memory and the approximation of a distance estimator. The Product Quantization (PQ) method approximates the distances by calculating, in the compressed domain, the distance between their quantized approximations. This method is statistically guaranteed to preserve the Euclidean distance between the vectors within an error bound directly related to the quantization error. The original PQ has been concurrently improved by Ge et al. (2013) and Norouzi & Fleet (2013), who learn an orthogonal transform minimizing the overall quantization loss. | 1612.03651#3 | 1612.03651#5 | 1612.03651 | [
"1510.03009"
]
|
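The chunk above describes LSH binarization: random projections followed by a sign function, with the cosine similarity recovered through a monotonous function of the Hamming distance. The following is a minimal NumPy sketch of that idea under the stated Charikar (2002) construction; the code dimensions and the use of Gaussian projections are illustrative assumptions, not the paper's implementation.

```python
# Minimal LSH binarization sketch: random Gaussian projections + sign,
# then a cosine estimate from the Hamming distance between the binary codes.
import numpy as np

def lsh_encode(X, n_bits, seed=0):
    """Binarize the rows of X with random projections (Charikar-style LSH)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_bits))
    return X @ W > 0  # boolean codes, one row per vector

def cosine_from_hamming(code_a, code_b):
    """Monotonous estimate of cos(x, y) from the Hamming distance of the codes."""
    hamming = np.count_nonzero(code_a != code_b)
    theta = np.pi * hamming / code_a.shape[0]
    return np.cos(theta)

x, y = np.random.randn(2, 64)
cx, cy = lsh_encode(np.stack([x, y]), n_bits=256)
estimate = cosine_from_hamming(cx, cy)
exact = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(estimate, exact)
```

More bits per code tightens the estimate, which is the memory/accuracy trade-off the later experiments measure.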
1612.03651#5 | FastText.zip: Compressing text classification models | In our paper, we will consider the Optimized Product Quantization (OPQ) variant (Ge et al., 2013).
Softmax approximation. The aforementioned works approximate either the Euclidean distance or the cosine similarity (both being equivalent in the case of unit-norm vectors). However, in the context of fastText, we are specifically interested in approximating the maximum inner product involved in a softmax layer. Several approaches derived from LSH have been recently proposed to achieve this goal, such as Asymmetric LSH by Shrivastava & Li (2014), subsequently discussed by Neyshabur & Srebro (2015). In our work, since we are not constrained to purely binary codes, we resort to a more traditional encoding by employing a magnitude/direction parametrization of our vectors. Therefore we only need to encode/compress a unitary d-dimensional vector, which fits the aforementioned LSH and PQ methods well.
Neural network compression models. Recently, several research efforts have been conducted to compress the parameters of architectures involved in computer vision, namely for state-of-the-art Convolutional Neural Networks (CNNs) (Han et al., 2016; Lin et al., 2015). Some use vector quantization (Gong et al., 2014) while others binarize the network (Courbariaux et al., 2016). Denil et al. (2013) show that such classification models are easily compressed because they are over-parametrized, which concurs with early observations by LeCun et al. (1990).
2 In the literature, LSH refers to multiple distinct strategies related to the Johnson-Lindenstrauss lemma. For instance, LSH sometimes refers to a partitioning technique with random projections allowing for sublinear search via cell probes, see for instance the E2LSH variant of Datar et al. (2004).
Some of these works aim both at reducing the model size and at improving the speed. In our case, since the fastText classifier on which our proposal is built is already very efficient, we are primarily interested in reducing the size of the model while keeping a comparable classification efficiency.
# 3 PROPOSED APPROACH
3.1 TEXT CLASSIFICATION | 1612.03651#4 | 1612.03651#6 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#6 | FastText.zip: Compressing text classification models | In the context of text classification, linear classifiers (Joulin et al., 2016) remain competitive with more sophisticated, deeper models, and are much faster to train. On top of standard tricks commonly used in linear text classification (Agarwal et al., 2014; Wang & Manning, 2012; Weinberger et al., 2009), Joulin et al. (2016) use a low rank constraint to reduce the computation burden while sharing information between different classes. This is especially useful in the case of a large output space, where rare classes may have only a few training examples. In this paper, we focus on a similar model, that is, one which minimizes the softmax loss ℓ over N documents:
\sum_{n=1}^{N} \ell(y_n, B A x_n), \qquad (1)
where x_n is a bag of one-hot vectors and y_n the label of the n-th document. In the case of a large vocabulary and a large output space, the matrices A and B are big and can require gigabytes of memory. Below, we describe how we reduce this memory usage.
3.2 BOTTOM-UP PRODUCT QUANTIZATION
Product quantization is a popular method for compressed-domain approximate nearest neighbor search (Jegou et al., 2011). As a compression technique, it approximates a real-valued vector by finding the closest vector in a pre-defined structured set of centroids, referred to as a codebook. This codebook is not enumerated, since it is extremely large. Instead it is implicitly defined by its structure: a d-dimensional vector x ∈ R^d is approximated as
\hat{x} = \sum_{i=1}^{k} q_i(x), \qquad (2)
where the different subquantizers q_i : x ↦ q_i(x) are complementary in the sense that their respective centroids lie in distinct orthogonal subspaces, i.e., ∀ i ≠ j, ∀ x, y, ⟨q_i(x) | q_j(y)⟩ = 0. In the original PQ, the subspaces are aligned with the natural axis, while OPQ learns a rotation, which amounts to alleviating this constraint and to not depend on the original coordinate system. Another way to see this is to consider that PQ splits a given vector x into k subvectors x^i | 1612.03651#5 | 1612.03651#7 | 1612.03651 | [
"1510.03009"
]
|
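To make Eq. (1) concrete, here is a toy NumPy sketch of the low-rank linear classifier it describes: a document is a bag of feature indices, the matrix A embeds and averages them, B maps the hidden vector to label scores, and one SGD step minimizes the softmax loss. All sizes, the averaging of embeddings, and the plain gradient update are illustrative assumptions rather than the fastText implementation.

```python
import numpy as np

V, d, L = 10000, 16, 5                   # vocabulary size, embedding dim, number of labels
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(V, d))   # input embedding matrix A
B = rng.normal(scale=0.1, size=(d, L))   # output matrix B

doc_ids, label, lr = np.array([3, 17, 256]), 2, 0.1

# Forward pass: average the embeddings of the document's features (A x_n),
# score the labels (B A x_n), and take a softmax.
h = A[doc_ids].mean(axis=0)
scores = h @ B
p = np.exp(scores - scores.max())
p /= p.sum()
loss = -np.log(p[label])

# Backward pass: gradient of the softmax loss, applied to B and to the
# embeddings that appear in the document.
g = p.copy()
g[label] -= 1.0
A[doc_ids] -= lr * (B @ g) / len(doc_ids)
B -= lr * np.outer(h, g)
```

The memory problem the section targets is visible here: A has one d-dimensional row per word or n-gram bucket and B one per label, which is what the quantization and pruning below compress.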
1612.03651#7 | FastText.zip: Compressing text classification models | , i = 1...k, each of dimension d/k: x = [x^1 ... x^i ... x^k], and quantizes each sub-vector using a distinct k-means quantizer. Each subvector x^i is thus mapped to the closest centroid amongst 2^b centroids, where b is the number of bits required to store the quantization index of the subquantizer, typically b = 8. The reconstructed vector can take 2^{kb} distinct reproduction values, and is stored in kb bits. PQ estimates the inner product in the compressed domain as | 1612.03651#6 | 1612.03651#8 | 1612.03651 | [
"1510.03009"
]
|
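The chunk above is the core PQ recipe: split each vector into k subvectors, learn 2^b centroids per subspace with k-means, and store a vector as k small indices. Below is a compact sketch of that recipe using scikit-learn's KMeans for brevity; the data shapes and parameters are illustrative assumptions, and this is not the released fastText code.

```python
import numpy as np
from sklearn.cluster import KMeans

def pq_train(X, k=4, b=8, seed=0):
    """Learn one codebook of 2**b centroids per subspace (d must be divisible by k)."""
    subspaces = np.split(X, k, axis=1)
    return [KMeans(n_clusters=2**b, n_init=1, random_state=seed).fit(s) for s in subspaces]

def pq_encode(codebooks, x):
    """Map each subvector of x to the index of its closest centroid (k small integers)."""
    parts = np.split(x[None, :], len(codebooks), axis=1)
    return np.array([cb.predict(p)[0] for cb, p in zip(codebooks, parts)], dtype=np.uint8)

def pq_decode(codebooks, codes):
    """Reconstruct the vector by concatenating the selected centroids."""
    return np.concatenate([cb.cluster_centers_[c] for cb, c in zip(codebooks, codes)])

X = np.random.randn(2000, 16).astype(np.float32)
books = pq_train(X, k=4, b=8)
codes = pq_encode(books, X[0])   # 4 bytes instead of 16 floats
x_hat = pq_decode(books, codes)
```

With b = 8 each index fits in a byte, which is why the paper's settings keep byte alignment.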
1612.03651#8 | FastText.zip: Compressing text classification models |
x^T y ≈ \hat{x}^T y = \sum_{i=1}^{k} q_i(x^i)^T y^i. \qquad (3)
This is a straightforward extension of the square L2 distance estimation of Jegou et al. (2011). In practice, the vector estimate x̂ is trivially reconstructed from the codes, i.e., from the quantization indexes, by concatenating these centroids. The two parameters involved in PQ, namely the number of subquantizers k and the number of bits b per quantization index, are typically set to k ∈ [2, d/2], and b = 8 to ensure byte-alignment. | 1612.03651#7 | 1612.03651#9 | 1612.03651 | [
"1510.03009"
]
|
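Eq. (3) is usually evaluated asymmetrically: the query stays uncompressed, one lookup table of sub-inner-products is built per subquantizer, and the estimate for any encoded vector is a sum of k table entries. The sketch below illustrates that, reusing the hypothetical pq_* helpers and the `books`/`codes` names from the previous sketch; it is an assumption-laden illustration, not the paper's code.

```python
import numpy as np

def pq_inner_product_tables(codebooks, y):
    """tables[i][c] = <centroid c of subquantizer i, y^i> for the query subvector y^i."""
    parts = np.split(y, len(codebooks))
    return [cb.cluster_centers_ @ p for cb, p in zip(codebooks, parts)]

def pq_inner_product(tables, codes):
    """Estimate <x, y> from the codes of x: k table lookups and one sum (Eq. 3)."""
    return sum(t[c] for t, c in zip(tables, codes))

y = np.random.randn(16).astype(np.float32)
tables = pq_inner_product_tables(books, y)   # 'books' and 'codes' come from the PQ sketch above
approx = pq_inner_product(tables, codes)     # compare against the exact X[0] @ y
```

This is what makes scoring a softmax layer over quantized embeddings cheap: the cost per encoded vector is k additions, independent of d.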
1612.03651#9 | FastText.zip: Compressing text classification models | Discussion. PQ offers several interesting properties in our context of text classification. Firstly, the training is very fast because the subquantizers have a small number of centroids, i.e., 256 centroids for b = 8. Secondly, at test time it allows the reconstruction of the vectors with almost no computational and memory overhead. Thirdly, it has been successfully applied in computer vision, offering much better performance than binary codes, which makes it a natural candidate to compress relatively shallow models. As observed by Sánchez & Perronnin (2011), using PQ just before the last layer incurs a very limited loss in accuracy when combined with a support vector machine.
In the context of text classification, the norms of the vectors are widely spread, typically with a ratio of 1000 between the max and the min. Therefore k-means performs poorly because it optimizes an absolute error objective, so it maps all low-norm vectors to 0. A simple solution is to separate the norm and the angle of the vectors and to quantize them separately. This allows a quantization with no loss of performance, yet requires an extra b bits per vector. | 1612.03651#8 | 1612.03651#10 | 1612.03651 | [
"1510.03009"
]
|
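A small sketch of the norm/direction separation just described: the unit-norm direction goes through PQ, while the magnitude is quantized with its own scalar codebook, costing one extra index per vector. It reuses the hypothetical pq_* helpers from the earlier sketch, and the choice of a 256-level k-means in log-space for the norms is an assumption for illustration, not necessarily the paper's exact scalar quantizer.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_norm_codebook(norms, b=8, seed=0):
    """Scalar codebook for the magnitudes, learned in log-space."""
    return KMeans(n_clusters=2**b, n_init=1, random_state=seed).fit(
        np.log(norms).reshape(-1, 1))

def encode_with_norm(codebooks, norm_cb, x):
    n = np.linalg.norm(x) + 1e-12
    direction_codes = pq_encode(codebooks, x / n)        # quantize the angle
    norm_code = norm_cb.predict(np.log([[n]]))[0]        # quantize the magnitude
    return direction_codes, np.uint8(norm_code)

def decode_with_norm(codebooks, norm_cb, direction_codes, norm_code):
    n = np.exp(norm_cb.cluster_centers_[norm_code][0])
    return n * pq_decode(codebooks, direction_codes)
```

Because low-norm and high-norm embeddings now share the same unit sphere, the subquantizers no longer collapse rare, low-norm features to zero.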
1612.03651#10 | FastText.zip: Compressing text classification models | Bottom-up strategy: re-training. The first works aiming at compressing CNN models, like the one proposed by Gong et al. (2014), used the reconstruction from off-the-shelf PQ, i.e., without any re-training. However, as observed in Sablayrolles et al. (2016), when using quantization methods like PQ, it is better to re-train the layers occurring after the quantization, so that the network can re-adjust itself to the quantization. There is a strong argument for this re-training strategy: the square magnitude of vectors is reduced, on average, by the average quantization error for any quantizer satisfying the Lloyd conditions; see Jegou et al. (2011) for details. This suggests a bottom-up learning strategy where we first quantize the input matrix, then retrain and quantize the output matrix (the input matrix being frozen). Experiments in Section 4 show that it is worth adopting this strategy.
Memory savings with PQ. In practice, the bottom-up PQ strategy offers a compression factor of 10 without any noticeable loss of performance. Without re-training, we notice a drop in accuracy between 0.1% and 0.5%, depending on the dataset and setting; see Section 4 and the appendix.
# 3.3 FURTHER TEXT SPECIFIC TRICKS
The memory usage strongly depends on the size of the vocabulary, which can be large in many text classification tasks. While it is clear that a large part of the vocabulary is useless or redundant, directly reducing the vocabulary to the most frequent words is not satisfactory: most of the frequent words, like "the" or "is" | 1612.03651#9 | 1612.03651#11 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#11 | FastText.zip: Compressing text classification models | are not discriminative, in contrast to some rare words, e.g., in the context of tag prediction. In this section, we discuss a few heuristics to reduce the space taken by the dictionary. They lead to major memory reduction, in extreme cases by a factor 100. We experimentally show that this drastic reduction is complementary with the PQ compression method, meaning that the combination of both strategies reduces the model size by a factor up to ×1000 for some datasets.
Pruning the vocabulary. Discovering which word or n-gram must be kept to preserve the overall performance is a feature selection problem. While many approaches have been proposed to select groups of variables during training (Bach et al., 2012; Meier et al., 2008), we are interested in selecting a fixed subset of K words and n-grams from a pre-trained model. This can be achieved by selecting the K embeddings that preserve as much of the model as possible, which can be reduced to selecting the K words and n-grams associated with the highest norms. While this approach offers major memory savings, it has one drawback occurring in some particular cases: some documents may not contain any of the K best features, leading to a significant drop in performance. It is thus important to keep the K best features under the condition that they cover the whole training set. More formally, the problem is to find a subset S in the feature set V that maximizes the sum of their norms w_s under the constraint that all the documents in the training set D are covered:
\max_{S \subseteq V} \sum_{s \in S} w_s \quad \text{s.t.} \quad |S| \le K, \; P \mathbf{1}_S \ge \mathbf{1}_D,
where P is a matrix such that P_{ds} = 1 if the s-th feature is in the d-th document, and 0 otherwise. This problem is directly related to set covering problems that are NP-hard (Feige, 1998). Standard greedy approaches require the storing of an inverted index or multiple passes over the dataset, which is prohibitive on very large datasets (Chierichetti et al., 2010). This problem can be cast as an instance of online submodular maximization with a rank constraint (Badanidiyuru et al., 2014; | 1612.03651#10 | 1612.03651#12 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#12 | FastText.zip: Compressing text classification models | [Figure 1 plot: accuracy versus the number of bytes per vector (2, 4, 8) on Sogou, Yahoo and Yelp full, with curves for Full, PQ, OPQ, LSH+norm, PQ+norm and OPQ+norm.]
Figure 1: Accuracy as a function of the memory per vector/embedding on 3 datasets from Zhang et al. (2015). Note, an extra byte is required when we encode the norm explicitly ("norm").
Bateni et al., 2010). In our case, we use a simple online parallelizable greedy approach: for each document, we verify if it is already covered by a retained feature and, if not, we add the feature with the highest norm to our set of retained features. If the number of features is below k, we add the features with the highest norm that have not yet been picked. | 1612.03651#11 | 1612.03651#13 | 1612.03651 | [
"1510.03009"
]
|
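The online greedy coverage pruning just described is short enough to write out directly. The sketch below follows the two steps of the text (cover each document with its highest-norm feature, then fill the budget with the remaining highest-norm features) over toy Python data structures; it is an illustration under those assumptions, not the released implementation.

```python
def greedy_prune(docs, norms, K):
    """docs: list of lists of feature ids; norms: dict feature id -> embedding norm."""
    kept = set()
    # Single pass over the documents: if a document is not yet covered,
    # retain its highest-norm feature.
    for features in docs:
        if not kept.intersection(features):
            kept.add(max(features, key=lambda f: norms[f]))
    # Fill the remaining budget with the globally highest-norm features.
    for f in sorted(norms, key=norms.get, reverse=True):
        if len(kept) >= K:
            break
        kept.add(f)
    return kept

docs = [[1, 4, 7], [2, 4], [3, 9], [9, 5]]
norms = {1: 0.1, 2: 2.0, 3: 0.4, 4: 1.5, 5: 0.2, 7: 0.9, 9: 3.0}
print(greedy_prune(docs, norms, K=4))
```

The coverage pass needs only one sweep over the training documents and no inverted index, which is the point of the online formulation.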
1612.03651#13 | FastText.zip: Compressing text classification models | Hashing trick & Bloom filter. On small models, the dictionary can take a significant portion of the memory. Instead of saving it, we extend the hashing trick used in Joulin et al. (2016) to both words and n-grams. This strategy is also used in Vowpal Wabbit (Agarwal et al., 2014) in the context of online training. This allows us to save around 1-2Mb with almost no overhead at test time (just the cost of computing the hashing function). Pruning the vocabulary while using the hashing trick requires keeping a list of the indices of the K remaining buckets. At test time, a binary search over the list of indices is required. It has a complexity of O(log(K)) and a memory overhead of a few hundred kilobytes. Using Bloom filters instead reduces the complexity to O(1) at test time and saves a few hundred kilobytes. However, in practice, it degrades performance. | 1612.03651#12 | 1612.03651#14 | 1612.03651 | [
"1510.03009"
]
|
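A small sketch of the test-time lookup described above: features are hashed into buckets so that no dictionary has to be stored, and after pruning only a sorted list of the K retained bucket ids is kept; a binary search maps a hashed feature to its row in the compact embedding table, or rejects it. The hash function, bucket count and toy data are placeholder assumptions, not fastText's actual choices.

```python
import bisect
import hashlib
import numpy as np

NUM_BUCKETS = 2_000_000

def bucket(ngram: str) -> int:
    """Hash a word or n-gram into one of NUM_BUCKETS buckets (placeholder hash)."""
    h = int.from_bytes(hashlib.md5(ngram.encode()).digest()[:8], "little")
    return h % NUM_BUCKETS

kept_buckets = sorted({bucket(w) for w in ["worst", "mediocre", "poorly", "lacks"]})
embeddings = np.random.randn(len(kept_buckets), 8)   # compact table, one row per kept bucket

def lookup(ngram: str):
    """O(log K) binary search from hashed bucket id to the compact row, if retained."""
    b = bucket(ngram)
    i = bisect.bisect_left(kept_buckets, b)
    if i < len(kept_buckets) and kept_buckets[i] == b:
        return embeddings[i]
    return None   # pruned-out feature

print(lookup("worst") is not None, lookup("the") is not None)
```

A Bloom filter could replace the binary search for O(1) membership tests, at the cost of occasional false positives, which matches the trade-off noted in the text.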
1612.03651#14 | FastText.zip: Compressing text classification models | # 4 EXPERIMENTS
This section evaluates the quality of our model compression pipeline and compares it to other compression methods on different text classification problems, and to other compact text classifiers.
Evaluation protocol and datasets. Our experimental pipeline is as follows: we train a model using fastText with the default setting unless specified otherwise. That is, 2M buckets, a learning rate of 0.1 and 10 training epochs. The dimensionality d of the embeddings is set to powers of 2 to avoid border effects that could make the interpretation of the results more diffi- | 1612.03651#13 | 1612.03651#15 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#15 | FastText.zip: Compressing text classification models | cult. As baselines, we use Locality-Sensitive Hashing (LSH) (Charikar, 2002), PQ (Jegou et al., 2011) and OPQ (Ge et al., 2013) (the non-parametric variant). Note that we use an improved version of LSH where random orthogonal matrices are used instead of random matrix projections (Jégou et al., 2008). In a first series of experiments, we use the 8 datasets and evaluation protocol of Zhang et al. (2015). These datasets contain a few million documents and have at most 10 classes. We also explore the limit of quantization on a dataset with an extremely large output space, that is a tag dataset extracted from the YFCC100M collection (Thomee et al., 2016)3, referred to as FlickrTag in the rest of this paper. | 1612.03651#14 | 1612.03651#16 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#16 | FastText.zip: Compressing text classification models | [Figure 2 plot: eight panels (AG, Amazon full, Amazon polarity, DBPedia, Sogou, Yahoo, Yelp full, Yelp polarity) showing loss of accuracy against model size from 100kB to 100MB, with markers for Full, PQ, Pruned, Zhang et al. (2015) and Xiao & Cho (2016).]
Figure 2: Loss of accuracy as a function of the model size. We compare the compressed model with different levels of pruning with NPQ and the full fastText model. We also compare with Zhang et al. (2015) and Xiao & Cho (2016). Note that the size is in log scale.
4.1 SMALL DATASETS
Compression techniques. We compare three popular methods used for similarity estimation with compact codes: LSH, PQ and OPQ on the datasets released by Zhang et al. (2015). Figure 1 shows the accuracy as a function of the number of bytes used per embedding, which corresponds to the number k of subvectors in the case of PQ and OPQ. See more results in the appendix. As discussed in Section 2, LSH reproduces the cosine similarity and is therefore not adapted to un-normalized data. Therefore we only report results with normalization. Once normalized, PQ and OPQ are almost lossless even when using only k = 4 subquantizers per embedding (equivalently, bytes). We observe that using k = d/2, i.e., half of the components of the embeddings, works well in practice. In the rest of the paper and if not stated otherwise, we focus on this setting. The difference between the normalized versions of PQ and OPQ is limited and depends on the dataset. Therefore we adopt the normalized PQ (NPQ) for the rest of this study, since it is faster to train. | 1612.03651#15 | 1612.03651#17 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#17 | FastText.zip: Compressing text classification models |
word    Entropy  Norm      word            Entropy  Norm
.          1      354      mediocre          1399     1
,          2      176      disappointing      454     2
the        3      179      so-so             2809     3
and        4     1639      lacks             1244     4
i          5     2374      worthless         1757     5
a          6      970      dreadful          4358     6
to         7     1775      drm               6395     7
it         8     1956      poorly             716     8
of         9     2815      uninspired        4245     9
this      10     3275      worst              402    10
Table 1: Best ranked words w.r.t. entropy (left) and norm (right) on the Amazon full review dataset. We give the rank for both criteria. The norm ranking | 1612.03651#16 | 1612.03651#18 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#18 | FastText.zip: Compressing text classification models | filters out words carrying little information.
# 3Data available at https://research.facebook.com/research/fasttext/
Dataset             full         64KiB   32KiB   16KiB
AG                  65M   92.1   91.4    90.6    89.1
Amazon full         108M  60.0   58.8    56.0    52.9
Amazon pol.         113M  94.5   93.3    92.1    89.3
DBPedia             87M   98.4   98.2    98.1    97.4
Sogou               73M   96.4   96.4    96.3    95.5
Yahoo               122M  72.1   70.0    69.0    69.2
Yelp full           78M   63.8   63.2    62.4    58.7
Yelp pol.           77M   95.7   95.3    94.9    93.2
Average diff. [%]         0      -0.8    -1.7    -3.5
Table 2: Performance on very small models. We use a quantization with k = 1, hashing and an extreme pruning. The last row shows the average drop of performance for different sizes.
Pruning. Figure 2 shows the performance of our model with different sizes. | 1612.03651#17 | 1612.03651#19 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#19 | FastText.zip: Compressing text classification models | We fix k = d/2 and use different pruning thresholds. NPQ offers a compression rate of ×10 compared to the full model. As the pruning becomes more aggressive, the overall compression can increase up to ×1,000 with little drop of performance and no additional overhead at test time. In fact, using a smaller dictionary makes the model faster at test time. We also compare with character-level Convolutional Neural Networks (CNN) (Zhang et al., 2015; Xiao & Cho, 2016). They are attractive models for text classification because they achieve similar performance with less memory usage than linear models (Xiao & Cho, 2016). Even though fastText with the default setting uses more memory, NPQ is already on par with CNNs' memory usage. Note that CNNs are not quantized, and it would be worth seeing how much they can be quantized with no drop of performance. Such a study is beyond the scope of this paper. Our pruning is based on the norm of the embeddings according to the guidelines of Section 3.3. Table 1 compares the ranking obtained with norms to the ranking obtained using entropy, which is commonly used in unsupervised settings (Stolcke, 2000).
Extreme compression. Finally, in Table 2, we explore the limit of quantized models by looking at the performance obtained for models under 64KiB. Surprisingly, even at 64KiB and 32KiB, the drop of performance is only around 0.8% and 1.7% despite a compression rate of ×1,000–4,000.
4.2 LARGE DATASET: FLICKRTAG
In this section, we explore the limit of compression algorithms on very large datasets. Similar to Joulin et al. (2016), we consider a hashtag prediction dataset containing 312,116 labels. We set the minimum count for words at 10, leading to a dictionary of 1,427,667 words. We take 10M buckets for n-grams and a hierarchical softmax. We refer to this dataset as FlickrTag.
Output encoding. We are interested in understanding how the performance degrades if the classi- | 1612.03651#18 | 1612.03651#20 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#20 | FastText.zip: Compressing text classification models | fier is also quantized (i.e., the matrix B in Eq. 1) and when the pruning is at the limit of the minimum number of features required to cover the full dataset.
Model                  k     norm   retrain   Acc.   Size
full (uncompressed)                           45.4   12 GiB
Input                 128                     45.0   1.7 GiB
Input                 128    x                45.3   1.8 GiB
Input                 128    x      x         45.5   1.8 GiB
Input+Output          128    x                45.2   1.5 GiB
Input+Output          128    x      x         45.4   1.5 GiB
Table 3: FlickrTag: Infl | 1612.03651#19 | 1612.03651#21 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#21 | FastText.zip: Compressing text classification models | uence of quantizing the output matrix on performance. We use PQ for quantization with an optional normalization. We also retrain the output matrix after quantizing the input one. The "norm" refers to the separate encoding of the magnitude and angle, while "retrain" refers to the re-training bottom-up PQ method described in Section 3.2.
Table 3 shows that quantizing both the "input" matrix (i.e., A in Eq. 1) and the "output" matrix (i.e., B) does not degrade the performance compared to the full model. We use embeddings with d = 256 dimensions and use k = d/2 subquantizers. We do not use any text specific tricks, which leads to a compression factor of 8. Note that even if the output matrix is not retrained over the embeddings, the performance is only 0.2% away from the full model. As shown in the Appendix, using fewer subquantizers significantly decreases the performance for a small memory gain.
Model full Entropy pruning Norm pruning Max-Cover pruning #embeddings Memory Coverage [%] 2M 11.5M 12GiB 297MiB 174MiB 305MiB 179MiB 305MiB 179MiB 73.2 88.4 2M 1M 1M 2M 70.5 70.5 61.9 88.4 1M 88.4 Accuracy 45.4 32.1 30.5 41.6 35.8 45.5 43.9
Table 4: | 1612.03651#20 | 1612.03651#22 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#22 | FastText.zip: Compressing text classification models | FlickrTag: Comparison of entropy pruning, norm pruning and max-cover pruning methods. We show the coverage of the test set for each method.
Pruning. Table 4 shows how the performance evolves with pruning. We measure this effect on top of a fully quantized model. The full model misses 11.6% of the test set because of missing words (some documents are either only composed of hashtags or have only rare words). There are 312,116 labels and thus it seems reasonable to keep embeddings in the order of the million. A naive pruning with 1M features misses about 30–40% of the test set, leading to a significant drop of performance. On the other hand, even though the max-coverage pruning approach was set on the train set, it does not suffer from any coverage loss on the test set. This leads to a smaller drop of performance. If the pruning is too aggressive, however, the coverage decreases significantly.
# 5 FUTURE WORK
It may be possible to obtain further reduction of the model size in the future. One idea is to condition the size of the vectors (both for the input features and the labels) based on their frequency (Chen et al., 2015; Grave et al., 2016). For example, it is probably not worth representing the rare labels by full 256-dimensional vectors in the case of the FlickrTag dataset. Thus, conditioning the vector size on the frequency and norm seems like an interesting direction to explore in the future. We may also consider combining the entropy and norm pruning criteria: instead of keeping the features in the model based just on the frequency or the norm, we can use both to keep a good set of features. This could help to keep features that are both frequent and discriminative, and thereby to reduce the coverage problem that we have observed. Additionally, instead of pruning out the less useful features, we can decompose them into smaller units (Mikolov et al., 2012). For example, this can be achieved by splitting every non-discriminative word into a sequence of character trigrams. This could help in cases where training and test examples are very short (for example just a single word). | 1612.03651#21 | 1612.03651#23 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#23 | FastText.zip: Compressing text classification models | # 6 CONCLUSION
In this paper, we have presented several simple techniques to reduce, by several orders of magnitude, the memory complexity of certain text classifiers without sacrificing accuracy or speed. This is achieved by applying discriminative pruning which aims to keep only important features in the trained model, and by performing quantization of the weight matrices and hashing of the dictionary. We will publish the code as an extension of the fastText library. We hope that our work will serve as a baseline to the research community, where there is an increasing interest for comparing the performance of various deep learning text classifi- | 1612.03651#22 | 1612.03651#24 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#24 | FastText.zip: Compressing text classification models | ers for a given number of parameters. Overall, compared to recent work based on convolutional neural networks, fastText.zip is often more accurate, while requiring several orders of magnitude less time to train on common CPUs, and incurring a fraction of the memory complexity.
# REFERENCES
Alekh Agarwal, Olivier Chapelle, Miroslav Dudík, and John Langford. A reliable effective terascale linear learning system. Journal of Machine Learning Research, 15(1):1111–1133, 2014.
Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. | 1612.03651#23 | 1612.03651#25 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#25 | FastText.zip: Compressing text classification models | Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1-106, 2012.
Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Streaming submodular maximization: Massive data summarization on the fly. In SIGKDD, pp. 671–680. ACM, 2014. | 1612.03651#24 | 1612.03651#26 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#26 | FastText.zip: Compressing text classification models | Mohammad Hossein Bateni, Mohammad Taghi Hajiaghayi, and Morteza Zadimoghaddam. Submodular secretary problem and extensions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 39–52. Springer, 2010.
Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pp. 380–388, May 2002.
Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. arXiv preprint arXiv:1512.04906, 2015.
Flavio Chierichetti, Ravi Kumar, and Andrew Tomkins. Max-cover in map-reduce. | 1612.03651#25 | 1612.03651#27 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#27 | FastText.zip: Compressing text classification models | In International Conference on World Wide Web, 2010.
Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
M. Datar, N. Immorlica, P. Indyk, and V.S. Mirrokni. | 1612.03651#26 | 1612.03651#28 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#28 | FastText.zip: Compressing text classification models | Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the Symposium on Computational Geometry, pp. 253–262, 2004.
Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 1990.
Misha Denil, Babak Shakibi, Laurent Dinh, Marc-Aurelio Ranzato, and Nando de Freitas. | 1612.03651#27 | 1612.03651#29 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#29 | FastText.zip: Compressing text classification models | Predicting parameters in deep learning. In NIPS, pp. 2148–2156, 2013.
Uriel Feige. A threshold of ln n for approximating set cover. JACM, 45(4):634–652, 1998.
Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization for approximate nearest neighbor search. In CVPR, June 2013.
Yunchao Gong and Svetlana Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In CVPR, June 2011. | 1612.03651#28 | 1612.03651#30 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#30 | FastText.zip: Compressing text classification models | Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Efficient softmax approximation for GPUs. arXiv preprint arXiv:1609.04309, 2016. | 1612.03651#29 | 1612.03651#31 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#31 | FastText.zip: Compressing text classification models | Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR, 2016.
Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Hamming embedding and weak geometric consistency for large scale image search. In ECCV, October 2008.
Hervé Jégou, Matthijs Douze, and Cordelia Schmid. | 1612.03651#30 | 1612.03651#32 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#32 | FastText.zip: Compressing text classification models | Product quantization for nearest neighbor search. IEEE Trans. PAMI, January 2011.
Thorsten Joachims. Text categorization with support vector machines: Learning with many relevant features. Springer, 1998.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759, 2016.
Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. NIPS, 2:598–605, 1990.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.
Andrew McCallum and Kamal Nigam. A comparison of event models for naive Bayes text classifi- | 1612.03651#31 | 1612.03651#33 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#33 | FastText.zip: Compressing text classification models | cation. In AAAI workshop on learning for text categorization, 1998.
Lukas Meier, Sara Van De Geer, and Peter Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):53–71, 2008.
Tomas Mikolov. Statistical language models based on neural networks. PhD thesis, VUT Brno, 2012.
Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J Cernocky. Subword language modeling with neural networks. preprint, 2012.
Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric LSHs for inner product search. In ICML, pp. 1926– | 1612.03651#32 | 1612.03651#34 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#34 | FastText.zip: Compressing text classification models | 1934, 2015.
Mohammad Norouzi and David Fleet. Cartesian k-means. In CVPR, June 2013.
Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2008.
Alexandre Sablayrolles, Matthijs Douze, Hervé Jégou, and Nicolas Usunier. How should we evaluate supervised hashing? arXiv preprint arXiv:1609.06753, 2016.
Jorge Sánchez and Florent Perronnin. High-dimensional signature compression for large-scale image classifi- | 1612.03651#33 | 1612.03651#35 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#35 | FastText.zip: Compressing text classification models | cation. In CVPR, 2011.
Anshumali Shrivastava and Ping Li. Asymmetric LSH for sublinear time maximum inner product search. In NIPS, pp. 2321–2329, 2014.
Andreas Stolcke. Entropy-based pruning of backoff language models. arXiv preprint cs/0006025, 2000.
David Talbot and Thorsten Brants. Randomized language models via perfect hash functions. In ACL, 2008.
Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. YFCC100M: The new data in multimedia research. In Communications of the ACM, 2016.
Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.
Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. Learning to hash for indexing big data - A survey. CoRR, abs/1509.05472, 2015.
Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL, 2012.
Kilian Q Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. | 1612.03651#34 | 1612.03651#36 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#36 | FastText.zip: Compressing text classification models | Feature hashing for large scale multitask learning. In ICML, 2009.
Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NIPS, December 2009.
Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367, 2016.
Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.
# APPENDIX
In the appendix, we show some additional results. The model used in these experiments only had 1M ngram buckets. In Table 5, we show a thorough comparison of LSH, PQ and OPQ on 8 different datasets. Table 7 summarizes the comparison with CNNs in terms of accuracy and size. Table 8 shows a thorough comparison of the hashing trick and the Bloom filters.
Quant. k norm AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. | 1612.03651#35 | 1612.03651#37 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#37 | FastText.zip: Compressing text classification models | Yelp p. full full,nodict 92.1 36M 59.8 97M 94.5 104M 98.4 67M 96.3 47M 92.1 34M 59.9 78M 94.5 72 83M 98.4 56M 96.3 42M 72.2 120M 63.7 56M 95.7 53M 91M 63.6 48M 95.6 46M LSH PQ OPQ LSH PQ OPQ 8 8 8 8 8 8 x x x 88.7 8.5M 51.3 20M 90.3 91.7 8.5M 59.3 20M 94.4 91.9 8.5M 59.3 20M 94.4 91.9 9.5M 59.4 22M 94.5 92.0 9.5M 59.8 22M 94.5 92.1 9.5M 59.9 22M 94.5 21M 92.7 14M 94.2 11M 54.8 21M 97.4 14M 96.1 11M 71.3 21M 96.9 14M 95.8 11M 71.4 24M 97.8 16M 96.2 12M 71.6 24M 98.4 16M 96.3 12M 72.1 24M 98.4 16M 96.3 12M 72.2 23M 56.7 12M 92.2 12M 23M 62.8 12M 95.4 12M 23M 62.5 12M 95.4 12M 26M 63.4 14M 95.6 13M 26M 63.7 14M 95.6 13M 26M 63.6 14M 95.6 13M LSH PQ OPQ LSH PQ OPQ 4 4 4 4 4 4 x x x 88.3 4.3M 50.5 9.7M 88.9 91.6 4.3M 59.2 9.7M 94.4 91.7 4.3M 59.0 9.7M 94.4 92.1 5.3M 59.2 13M 94.4 92.1 5.3M 59.8 13M 94.5 92.2 5.3M 59.8 13M 94.5 11M 91.6 7.0M 94.3 5.3M 54.6 11M 96.3 7.0M 96.1 5.3M 71.0 11M 96.9 7.0M 95.6 5.3M 71.2 13M 97.7 8.8M 96.2 6.6M 71.1 13M 98.4 8.8M 96.3 6.6M 72.0 13M 98.3 8.8M 96.3 6.6M 72.1 12M 56.5 6.0M 92.9 5.7M 12M 62.2 6.0M 95.4 5.7M 12M 62.6 6.0M 95.4 5.7M 15M 63.1 7.4M 95.5 7.2M 15M 63.6 7.5M 95.6 7.2M 15M 63.7 7.5M 95.6 7.2M LSH PQ OPQ LSH PQ OPQ 2 2 2 2 2 2 x x x 87.7 2.2M 50.1 4.9M 88.9 5.2M 90.6 3.5M 93.9 2.7M 51.4 5.7M 56.6 3.0M 91.3 2.9M 91.1 2.2M 58.7 4.9M 94.4 5.2M 87.1 3.6M 95.3 2.7M 69.5 5.7M 62.1 3.0M 95.4 2.9M 91.4 2.2M 58.2 4.9M 94.3 5.2M 91.6 3.6M 94.2 2.7M 69.6 5.7M 62.1 3.0M 95.4 2.9M 91.8 3.2M 58.6 7.3M 94.3 7.8M 97.1 5.3M 96.1 4.0M 69.7 8.6M 62.7 4.5M 95.5 4.3M 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3M 92.1 3.2M 59.5 7.3M 94.5 7.8M 98.1 5.3M 96.2 4.0M 71.5 8.6M 63.4 4.5M 95.6 4.3M | 1612.03651#36 | 1612.03651#38 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#38 | FastText.zip: Compressing text classification models | Table 5: Comparison between standard quantization methods. The original model has a dimension- ality of 8 and 2M buckets. Note that all of the methods are without dictionary. k co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p. full, nodict full 8 full 4 full 2 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46M 92.0 9.5M 59.8 22M 94.5 24M 98.4 16M 96.3 12M 72.1 26M 63.7 14M 95.6 13M 92.1 5.3M 59.8 13M 94.5 13M 98.4 8.8M 96.3 6.6M 72 15M 63.6 7.5M 95.6 7.2M 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3M 8 8 8 8 200K 92.0 2.5M 59.7 2.5M 94.3 2.5M 98.5 2.5M 96.6 2.5M 71.8 2.5M 63.3 2.5M 95.6 2.5M 100K 91.9 1.3M 59.5 1.3M 94.3 1.3M 98.5 1.3M 96.6 1.3M 71.6 1.3M 63.4 1.3M 95.6 1.3M 50K 91.7 645K 59.7 645K 94.3 644K 98.5 645K 96.6 645K 71.5 645K 63.2 645K 95.6 644K 10K 91.3 137K 58.6 137K 93.2 137K 98.5 137K 96.5 137K 71.3 137K 63.3 137K 95.4 137K 4 4 4 4 200K 92.0 1.8M 59.7 1.8M 94.3 1.8M 98.5 1.8M 96.6 1.8M 71.7 1.8M 63.3 1.8M 95.6 1.8M 100K 91.9 889K 59.5 889K 94.4 889K 98.5 889K 96.6 889K 71.7 889K 63.4 889K 95.6 889K 50K 91.7 449K 59.6 449K 94.3 449K 98.5 450K 96.6 449K 71.4 450K 63.2 449K 95.5 449K 98K 98K 10K 91.5 98K 58.6 98K 93.2 98K 98.5 96.5 98K 71.2 98K 63.3 98K 95.4 2 2 2 2 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4M 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693K 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352K 78K 79K 10K 91.3 78K 58.5 78K 93.2 78K 98.4 96.5 78K 70.8 78K 63.2 78K 95.3 | 1612.03651#37 | 1612.03651#39 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#39 | FastText.zip: Compressing text classification models | Table 6: Comparison with different quantization and level of pruning. â coâ is the cut-off parameter of the pruning. 11 # Under review as a conference paper at ICLR 2017 Dataset Zhang et al. (2015) Xiao & Cho (2016) AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p. 90.2 59.5 94.5 98.3 95.1 70.5 61.6 94.8 108M 10.8M 10.8M 108M 108M 108M 108M 108M 91.4 59.2 94.1 98.6 95.2 71.4 61.8 94.5 80M 1.6M 1.6M 1.2M 1.6M 80M 1.4M 1.2M 91.9 59.6 94.3 98.5 96.5 71.7 63.3 95.5 889K 449K 449K 98K 98K 889K 98K 449K Table 7: | 1612.03651#38 | 1612.03651#40 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#40 | FastText.zip: Compressing text classification models | Comparison between CNNs and fastText with and without quantization. The numbers for Zhang et al. (2015) are reported from Xiao & Cho (2016). Note that for the CNNs, we report the size of the model under the assumption that they use ï¬ oat32 storage. For fastText(+PQ) we report the memory used in RAM at test time. m o o l Quant. B full,nodict NPQ NPQ NPQ NPQ NPQ NPQ NPQ NPQ x x x x co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. | 1612.03651#39 | 1612.03651#41 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#41 | FastText.zip: Compressing text classification models | Yelp p. 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46M 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4M 200K 92.2 830K 59.3 830K 94.1 830K 98.4 830K 96.5 830K 70.7 830K 63.0 830K 95.5 830K 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693K 100K 91.8 420K 59.1 420K 93.9 420K 98.4 420K 96.5 420K 70.6 420K 62.8 420K 95.3 420K 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352K 50K 91.5 215K 58.8 215K 93.6 215K 98.3 215K 96.5 215K 70.1 215K 62.7 215K 95.1 215K 78K 10K 91.3 51K 10K 90.8 78K 58.5 51K 56.8 78K 93.2 51K 91.7 78K 98.4 51K 98.1 79K 96.5 51K 96.1 78K 70.8 51K 68.7 78K 63.2 51K 61.7 78K 95.3 51K 94.5 | 1612.03651#40 | 1612.03651#42 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#42 | FastText.zip: Compressing text classification models | Table 8: Comparison with and without Bloom ï¬ lters. For NPQ, we set d = 8 and k = 2. 12 # Under review as a conference paper at ICLR 2017 Model k norm retrain Acc. Size full 45.4 12G 128 Input 128 Input 128 Input 128 Input+Output 128 Input+Output 128 Input+Output, co=2M Input+Output, n co=1M 128 x x x x x x x x x x 45.0 45.3 45.5 45.2 45.4 45.5 43.9 1.7G 1.8G 1.8G 1.5G 1.5G 305M 179M Input Input Input Input+Output Input+Output Input+Output, co=2M Input+Output, co=1M Input+Output, co=2M Input+Output, co=1M 64 64 64 64 64 64 64 64 64 x x x x x x x x x x x 44.0 44.7 44.9 44.6 44.8 42.5 39.9 45.0 43.4 1.1G 1.1G 1.1G 784M 784M 183M 118M 183M 118M x x Table 9: | 1612.03651#41 | 1612.03651#43 | 1612.03651 | [
"1510.03009"
]
|
1612.03651#43 | FastText.zip: Compressing text classification models | FlickrTag: Comparison for a large dataset of (i) different quantization methods and param- eters, (ii) with or without re-training. 13 | 1612.03651#42 | 1612.03651 | [
"1510.03009"
]
|
|
1612.03801#0 | DeepMind Lab | arXiv:1612.03801v2 [cs.AI] 13 Dec 2016
# DeepMind Lab
Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg and Stig Petersen | 1612.03801#1 | 1612.03801 | [
"1605.02097"
]
|
|
1612.03801#1 | DeepMind Lab | November 8, 2021
# Abstract
DeepMind Lab is a first-person 3D game platform designed for research and development of general artificial intelligence and machine learning systems. DeepMind Lab can be used to study how autonomous artificial agents may learn complex tasks in large, partially observed, and visually diverse worlds. DeepMind Lab has a simple and flexible API enabling creative task-designs and novel AI-designs to be explored and quickly iterated upon. It is powered by a fast and widely recognised game engine, and tailored for effective use by the research community.
# Introduction
General intelligence measures an agent's ability to achieve goals in a wide range of environments (Legg and Hutter, 2007). The only known examples of general-purpose intelligence arose from a combination of evolution, development, and learning, grounded in the physics of the real world and the sensory apparatus of animals. An unknown, but potentially large, fraction of animal and human intelligence is a direct consequence of the perceptual and physical richness of our environment, and is unlikely to arise without it (e.g. Locke, 1690; Hume, 1739). One option is to directly study embodied intelligence in the real world itself using robots (e.g. Brooks, 1990; Metta et al., 2008). However, progress on that front will always be hindered by the too-slow passing of real time and the expense of the physical hardware involved. Realistic virtual worlds on the other hand, if they are sufficiently detailed, can get the best of both, combining perceptual and physical near-realism with the speed and flexibility of software. Previous efforts to construct realistic virtual worlds as platforms for AI research have been stymied by the considerable engineering involved. To fill the gap, we present DeepMind Lab. DeepMind Lab is a first-person 3D game platform built on top of id software's Quake III Arena (id software, 1999) engine. The world is rendered with rich science fiction-style visuals. Actions are to look around and move in 3D. Example tasks include navigation in mazes, collecting fruit, traversing dangerous passages and avoiding falling off | 1612.03801#0 | 1612.03801#2 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#2 | DeepMind Lab | cliffs, bouncing through space using launch pads to move between platforms, laser tag, quickly learning and remembering random procedurally generated environments, and tasks inspired by Neuroscience experiments. DeepMind Lab is already a major research platform within DeepMind. In particular, it has been used to develop asynchronous methods for reinforcement learning (Mnih et al., 2016), unsupervised auxiliary tasks (Jaderberg et al., 2016), and to study navigation (Mirowski et al., 2016).
DeepMind Lab may be compared to other game-based AI research platforms emphasising pixels-to-actions autonomous learning agents. The Arcade Learning Environment (Atari) (Bellemare et al., 2012), which we have used extensively at DeepMind, is neither 3D nor first-person. Among 3D platforms for AI research, DeepMind Lab is comparable to others like VizDoom (Kempka et al., 2016) and Minecraft (Johnson et al., 2016; Tessler et al., 2016). However, it pushes the envelope beyond what is possible in those platforms. In comparison, DeepMind Lab has considerably richer visuals and more naturalistic physics. The action space allows for fine-grained pointing in a fully 3D world. Compared to VizDoom, DeepMind Lab is more removed from its origin in a first-person shooter genre video game. This work is different and complementary to other recent projects which run as plugins to access internal content in the Unreal engine (Qiu and Yuille, 2016; Lerer et al., 2016). Any of these systems can be used to generate static datasets for computer vision as described e.g., in Mahendran et al. (2016); Richter et al. (2016). | 1612.03801#1 | 1612.03801#3 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#3 | DeepMind Lab | Artificial general intelligence (AGI) research in DeepMind Lab emphasises 3D vision from raw pixel inputs, first-person (egocentric) viewpoints, fine motor dexterity, navigation, planning, strategy, time, and fully autonomous agents that must learn for themselves what tasks to perform by exploration of their environment. All these factors make learning difficult. Each is considered a frontier research question on its own. Putting them all together in one platform, as we have, is a significant challenge for the field.
# DeepMind Lab Research Platform
DeepMind Lab is built on top of id software's Quake III Arena (id software, 1999) engine using the ioquake3 (Nussel et al., 2016) version of the codebase, which is actively maintained by enthusiasts in the open source community. DeepMind Lab also includes tools from q3map2 (GtkRadiant, 2016) and bspc (bspc, 2016) for level generation. The bot scripts are based on code from the OpenArena (OpenArena, 2016) project.
# Tailored for machine learning
A custom set of assets was created to give the platform a unique and stylised look and feel, with a focus on rich visuals tailored for machine learning. A reinforcement learning API has been built on top of the game engine, providing agents with complex observations and accepting a rich set of actions. The interaction with the platform is lock-stepped, with the engine stepped forward one simulation step (or multiple with repeated actions, if desired) at a time, according to a user-specified frame rate. Thus, the game is effectively paused after an observation is provided until an agent provides the next action(s) to take.
# Observations
At each step, the engine provides reward, pixel-based observations and, optionally, velocity information (figure 1):
[Figure 1: diagram of the observations provided to the agent: reward, pixels and agent velocity.]
Figure 1: Observations available to the agent. In our experience, reward and pixels are sufficient to train an agent, whereas depth and velocity information can be useful for further analysis. | 1612.03801#2 | 1612.03801#4 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#4 | DeepMind Lab | [Figure 2: diagram of the action space: rotate up/down, rotate left/right, move forward/back, strafe left/right.]
Figure 2: The action space includes movement in three dimensions and look direction around two axes.
1. The reward signal is a scalar value that is effectively the score of each level.
2. The platform provides access to the raw pixels as rendered by the game engine from the player's first-person perspective, formatted as RGB pixels. There is also an RGBD format, which additionally exposes per-pixel depth values, mimicking the range sensors used in robotics and biological stereo-vision. | 1612.03801#3 | 1612.03801#5 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#5 | DeepMind Lab | 3. For certain research applications the agent's translational and angular velocities may be useful. These are exposed as two separate three-dimensional vectors.
# Actions
Agents can provide multiple simultaneous actions to control movement (forward/back, strafe left/right, crouch, jump), looking (up/down, left/right) and tagging (in laser tag levels with opponent bots), as illustrated in figure 2.
# Example levels
Figures 7 and 8 show a gallery of screen shots from the first-person perspective of the agent. The levels can be divided into four categories:
1. Simple fruit gathering levels with a static map (seekavoid_arena_01 and stairway_to_melon). The goal of these levels is to collect apples (small positive reward) and melons (large positive reward) while avoiding lemons (small negative reward). | 1612.03801#4 | 1612.03801#6 | 1612.03801 | [
"1605.02097"
]
|
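The action interface described above can be exercised with a small random-agent loop. The sketch below uses only the API calls that appear later in the document (deepmind_lab.Lab, reset, step, observations); the seven-component integer action vector matches the later listing, but which component drives which of the look/strafe/move/fire/jump/crouch behaviours, and the value range sampled here, are assumptions for illustration only.

```python
import numpy as np
import deepmind_lab

env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLACED'])
env.reset()

total_reward = 0.0
for _ in range(100):
    # Random integer action; each slot is one of the simultaneous controls
    # (looking, strafing, moving, tagging, jumping, crouching) listed above.
    action = np.random.randint(-1, 2, size=(7,)).astype(np.intc)
    total_reward += env.step(action, num_steps=4)

print('accumulated reward:', total_reward)
```

Because the engine is lock-stepped, the loop advances exactly four simulation frames per call, so the agent's wall-clock speed is bounded only by rendering and the agent's own compute.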
1612.03801#6 | DeepMind Lab | 2. Navigation levels with a static map layout (nav_maze_static_0{1, 2, 3} and nav_maze_random_goal_0{1, 2, 3}). These levels test the agent's ability to find their way to a goal in a fixed maze that remains the same across episodes. The starting location is random. In the random goal variant, the location of the goal changes in every episode. The optimal policy is to find the goal's location at the start of each episode and then use long-term knowledge of the maze layout to return to it as quickly as possible from any location. The static variant is simpler in that the goal location is always fixed for all episodes and only the agent's starting location changes, so the optimal policy does not require the first step of exploring to find the current goal location. The specific layouts are shown in figure 3.
3. Procedurally-generated navigation levels requiring effective exploration of a new maze generated on-the-fly at the start of each episode (random_maze). These levels test the agent's ability to explore a totally new environment. The optimal policy would begin by exploring the maze to rapidly learn its layout and then exploit that knowledge to repeatedly return to the goal as many times as possible before the end of the episode (three minutes).
4. Laser-tag levels requiring agents to wield laser-like science fiction gadgets to tag bots controlled by the game's in-built AI (lt_horseshoe_color, lt_chasm, lt_hallway_slope, and lt_space_bounce_hard). A reward of 1 is delivered whenever the agent tags a bot by reducing its shield to 0. These levels approximate the usual gameplay from Quake III Arena. In lt_hallway_slope there is a sloped arena, requiring the agent to look up and down. In lt_chasm and lt_space_bounce_hard there are pits that the agent must jump over and avoid falling into. In lt_horseshoe_color and lt_space_bounce_hard, the colours and textures of the bots are randomly generated at the start of each episode. This prevents agents from relying on colour for bot detection. | 1612.03801#5 | 1612.03801#7 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#7 | DeepMind Lab | These levels test aspects of fine control (for aiming), planning (to anticipate where bots are likely to move), strategy (to control key areas of the map such as gadget spawn points), and robustness to the substantial visual complexity arising from the large numbers of independently moving objects (gadget projectiles and bots).
# Technical Details
The original game engine is written in C and, to ensure compatibility with future changes to the engine, it has only been modified where necessary. DeepMind Lab provides a simple C API and ships with Python bindings.
Figure 3: Top-down views of static maze levels. Left: nav_maze_static_01, middle: nav_maze_static_02 and right: nav_maze_static_03.
The platform includes an extensive level API, written in Lua, to allow custom level creation and mechanics. | 1612.03801#6 | 1612.03801#8 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#8 | DeepMind Lab | This approach has resulted in a highly flexible platform with minimal changes to the original game engine. DeepMind Lab supports Linux and has been tested on several major distributions.
# API for agents and humans
The engine can be run either in a window, or it can be run headless for higher performance and support for non-windowed environments like a remote terminal. Rendering uses OpenGL and can make use of either a GPU or a software renderer.
A DeepMind Lab instance is initialised with the user's settings for level name, screen resolution and frame rate. After initialisation a simple RL-style API is followed to interact with the environment, as per figure 4.
```python
import numpy as np
import deepmind_lab

# Construct and start the environment.
lab = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLACED'])
lab.reset()

# Create all-zeros vector for actions.
action = np.zeros([7], dtype=np.intc)

# Advance the environment 4 frames while executing the action.
reward = lab.step(action, num_steps=4)

# Retrieve the observations of the environment in its new state.
obs = lab.observations()
rgb_i = obs['RGB_INTERLACED']
assert rgb_i.shape == (240, 320, 3)
```
Figure 4: Python API example.
# Level generation
Levels for DeepMind Lab are Quake III Arena levels. They are packaged into .pk3 files (which are ZIP files) and consist of a number of components, including level geometry, navigation information and textures. DeepMind Lab includes tools to generate maps from .map files. These can be cumbersome to edit by hand, but a variety of level editors are freely available, e.g. | 1612.03801#7 | 1612.03801#9 | 1612.03801 | [
"1605.02097"
]
|
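Extending the API excerpt in the chunk above, a complete episode loop for a random agent might look like the sketch below. This is illustrative rather than code shipped with the platform: the uniform action sampling is a simplification, and it assumes the is_running helper of the Python API together with the fixed 7-dimensional action vector used in the excerpt.

import numpy as np
import deepmind_lab

env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLACED'])
env.reset()

episode_return = 0.0
while env.is_running():
    # Random values for the 7 action channels (look, strafe, move, fire, jump, crouch).
    action = np.random.randint(-1, 2, size=(7,)).astype(np.intc)
    episode_return += env.step(action, num_steps=4)

print('episode return:', episode_return)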
1612.03801#9 | DeepMind Lab | map = [[
************** *******
* *** *
** *** I
***** *
***** *** *******
*****
***** ******
****** H *******
* I P *
**************
]]

Figure 5: Example text level specification, where '*' is a wall piece, 'P' is a spawn point and 'H' and 'I' are doors. Figure 6: A level with the layout generated from the text in figure 5. In the Lua-based level API each level can be customised further with logic for bots, item pickups, custom observations, level restarts, reward schemes, in-game messages and many other aspects. # Results and Performance Tables 1 and 2 show the platform's performance at different resolutions for two typical levels included with the platform. The frame rates listed were computed by connecting an agent performing random actions via the Python API. This agent has insignificant overhead so the results are dominated by engine simulation and rendering times. The benchmarks were run on a Linux desktop with a 6-core Intel Xeon 3.50GHz CPU and an NVIDIA Quadro K600 GPU. Table 1: Frame rate (frames/second) on nav_maze_static_01 level: 84 x 84: 199.7; 160 x 120: 86.8; 320 x 240: 27.3. Table 2: Frame rate (frames/second) on lt_space_bounce_hard level: 84 x 84: 286.7; 160 x 120: 237.7; 320 x 240: 82.2. | 1612.03801#8 | 1612.03801#10 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#10 | DeepMind Lab | Machine learning results from early versions of the DeepMind Lab platform can be found in Mnih et al. (2016); Jaderberg et al. (2016); Mirowski et al. (2016). # Conclusion DeepMind Lab enables research in a 3D world with rich science fiction visuals and game-like physics. DeepMind Lab facilitates creative task development. A wide range of environments, tasks, and intelligence tests can be built with it. We are excited to see what the research community comes up with. # Acknowledgements This work would not have been possible without the support of DeepMind and our many colleagues there who have helped mature the platform. In particular we would like to thank Thomas Köppe, Hado van Hasselt, Volodymyr Mnih, Dharshan Kumaran, Timothy Lillicrap, Raia Hadsell, Andrea Banino, Piotr Mirowski, Antonio Garcia, Timo Ewalds, Colin Murdoch, Chris Apps, Andreas Fidjeland, Max Jaderberg, Wojtek Czarnecki, Georg Ostrovski, Audrunas Gruslys, David Reichert, Tim Harley and Hubert Soyer. | 1612.03801#9 | 1612.03801#11 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#11 | DeepMind Lab | 7 # References Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Ar- tiï¬ cial Intelligence Research, 2012. Rodney A Brooks. Elephants donâ t play chess. Robotics and autonomous systems, 6 (1):3â 15, 1990. bspc. bspc, 2016. URL https://github.com/TTimo/bspc. GtkRadiant. Gtkradiant, 2016. URL http://icculus.org/gtkradiant/. | 1612.03801#10 | 1612.03801#12 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#12 | DeepMind Lab | David Hume. Treatise on human nature. 1739. id software. Quake3, 1999. URL https://github.com/id-Software/ Quake-III-Arena. Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsu- pervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016. Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The malmo platform for artiï¬ cial intelligence experimentation. In International joint confer- ence on artiï¬ cial intelligence (IJCAI), 2016. | 1612.03801#11 | 1612.03801#13 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#13 | DeepMind Lab | MichaÅ Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech JaÅ kowski. Vizdoom: A doom-based ai research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016. Shane Legg and Marcus Hutter. Universal intelligence: A deï¬ nition of machine intelligence. Minds and Machines, 17(4):391â 444, 2007. Adam Lerer, Sam Gross, and Rob Fergus. | 1612.03801#12 | 1612.03801#14 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#14 | DeepMind Lab | Learning physical intuition of block towers by example. arXiv preprint arXiv:1603.01312, 2016. John Locke. An essay concerning human understanding. 1690. A Mahendran, H Bilen, JF Henriques, and A Vedaldi. Researchdoom and cocodoom: Learning computer vision with games. arXiv preprint arXiv:1610.02431, 2016. Giorgio Metta, Giulio Sandini, David Vernon, Lorenzo Natale, and Francesco Nori. The icub humanoid robot: an open platform for research in embodied cognition. In Proceedings of the 8th workshop on performance metrics for intelligent systems, pages 50â 56. ACM, 2008. | 1612.03801#13 | 1612.03801#15 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#15 | DeepMind Lab | Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, et al. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timo- thy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016. | 1612.03801#14 | 1612.03801#16 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#16 | DeepMind Lab | 8 Ludwig Nussel, Thilo Schulz, Tim Angus, Tony J White, and Zachary J Slater. ioquake3, 2016. URL https://github.com/ioquake/ioq3. OpenArena. The openarena project, 2016. URL http://www.openarena.ws. Weichao Qiu and Alan Yuille. Unrealcv: Connecting computer vision to unreal engine. arXiv preprint arXiv:1609.01326, 2016. Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In European Conference on Computer Vision, pages 102â 118. Springer, 2016. Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. arXiv preprint arXiv:1604.07255, 2016. | 1612.03801#15 | 1612.03801#17 | 1612.03801 | [
"1605.02097"
]
|
1612.03801#17 | DeepMind Lab | Figure 7: Example images from the agent's egocentric viewpoint from several example DeepMind Lab levels (lt_chasm, lt_hallway_slope, lt_space_bounce_hard, nav_maze*01). Figure 8: Example images from the agent's egocentric viewpoint from several example DeepMind Lab levels (nav_maze*02, nav_maze*03, stairway_to_melon). | 1612.03801#16 | 1612.03801 | [
"1605.02097"
]
|
|
1612.03969#0 | Tracking the World State with Recurrent Entity Networks | arXiv:1612.03969v3 [cs.CL] 10 May 2017 Published as a conference paper at ICLR 2017 # TRACKING THE WORLD STATE WITH RECURRENT ENTITY NETWORKS # Mikael Henaff1,2, Jason Weston1, Arthur Szlam1, Antoine Bordes1 and Yann LeCun1,2 1Facebook AI Research 2Courant Institute, New York University {mbh305}@nyu.edu, {jase,aszlam,abordes,yann}@fb.com # ABSTRACT We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children's Book Test, where it obtains competitive performance, reading the story in a single pass. # INTRODUCTION The essence of intelligence is the ability to predict. An intelligent agent must be able to predict unobserved facts about their environment from limited percepts (visual, auditory, textual, or otherwise), combined with their knowledge of the past. In order to reason and plan, they must be able to predict how an observed event or action will affect the state of the world. | 1612.03969#1 | 1612.03969 | [
"1503.01007"
]
|
|
1612.03969#1 | Tracking the World State with Recurrent Entity Networks | Arguably, the ability to maintain an estimate of the current state of the world, combined with a forward model of how the world evolves, is a key feature of intelligent agents. A natural way for an agent to represent the world is to maintain a set of high-level concepts or entities together with their properties, which are updated as new information is received. For example, if a percept is the textual description of an event, such as "John walks out of the kitchen", the agent should learn to update its estimate of John's location, as well as the list (and number) of people present in each room. If John was carrying a bag, the location of the bag and the list of objects in the kitchen must also be updated. When we read a story, each sentence we read or hear causes us to update our internal representation of the current state of the world within the story. | 1612.03969#0 | 1612.03969#2 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#2 | Tracking the World State with Recurrent Entity Networks | The flow of the story is captured by the evolution of this state of the world. At any given time, an agent typically receives limited information about the state of the world, and should therefore be able to infer new information through partial observation. In this paper, we investigate this problem through a simple story understanding scenario, in which the agent is given a sequence of textual statements and events, and then given another series of statements about the final state of the world. If the second series of statements is given in the form of questions about the final state of the world together with their correct answers, the agent should be able to learn from them and its performance can be measured by the accuracy of its answers. | 1612.03969#1 | 1612.03969#3 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#3 | Tracking the World State with Recurrent Entity Networks | Even with this weak form of supervision, the system may learn basic dynamical constraints about the world. For example, it may learn that a person or object cannot be in two locations at the same time, or may learn simple update rules such as incrementing and decrementing the number of persons or objects in a room. It may also learn basic rules of approximate (logical) inference, such as the fact that objects belonging to the same category tend to have similar properties (light objects can be carried from room to room, for instance). We propose to handle this scenario with a new kind of memory-augmented neural network that uses a distributed memory and processor architecture: the Recurrent Entity Network (EntNet). The model consists of a fixed number of dynamic memory cells, each containing a vector key wj and a vector value (or content) hj. Each cell is associated with its own "processor", a simple gated recurrent network that may update the cell value given an input. If each cell learns to represent a concept or entity in the world, one can imagine a gating mechanism that, based on the key and content of the memory cells, will only modify the cells that concern the entities mentioned in the input. In the current version of the model, there is no direct interaction between the memory cells, hence the system can be seen as multiple identical processors functioning in parallel, with distributed local memory. Alternatively, the EntNet can be seen as a bank of gated RNNs (all sharing the same parameters), whose hidden states correspond to latent concepts and attributes, and whose parameters describe the laws of the world according to which the attributes of objects are updated. The sharing of these parameters reflects an invariance of these laws across object instances, similarly to how the weight tying scheme in a CNN reflects an invariance of image statistics across locations. Their hidden state is updated only when new information relevant to their concept is received, and remains otherwise unchanged. The keys used in the addressing/gating mechanism also correspond to concepts or entities, but are modified only during learning, not during inference. The EntNet is able to solve all 20 bAbI question-answering tasks (Weston et al., 2015), a popular benchmark of story understanding, which to our knowledge sets a new state-of-the-art. | 1612.03969#2 | 1612.03969#4 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#4 | Tracking the World State with Recurrent Entity Networks | Our experiments also indicate that the model indeed maintains an internal representation of the simplified world in which the stories take place, and that the model does not limit itself to storing the aspects of the world required to answer a specific question. We also introduce a new reasoning task which, unlike the bAbI tasks, requires a model to use a large number of supporting facts to answer the question, and show that the EntNet outperforms both LSTMs and Memory Networks (Sukhbaatar et al., 2015) by a significant margin. It is also able to generalize to sequences longer than those seen during training. Finally, our model also obtains competitive results on the Children's Book Test (Hill et al., 2016), and performs best among models that read the text in a single pass before receiving knowledge of the question. # 2 MODEL Our model is designed to process data in sequential form, and consists of three main parts: an input encoder, a dynamic memory and an output layer, which we now describe in detail. We developed it in the context of question answering on short stories where the inputs are word sequences, but the model could be adapted to many other contexts. 2.1 INPUT ENCODER The encoding layer summarizes an element of the input sequence with a vector of fixed length. Typically the input element at time t is a sequence of words, e.g. a sentence or window of words. One is free to choose the encoding module to be any standard sequence encoder, which is an active area of research. Typical choices include a bag-of-words (BoW) representation or the final state of a recurrent neural net (RNN) run over the sequence. In this work, we use a simple encoder consisting of a learned multiplicative mask followed by a summation. More precisely, let the input at time t be a sequence of words with embeddings {e_1, ..., e_k}. The vector representation of this input is then: $s_t = \sum_i f_i \odot e_i$ (1). The same set of vectors {f_1, ..., f_k} are used at each time step and are learned jointly with the other parameters of the model. Note that the model can choose to adopt a standard BoW representation | 1612.03969#3 | 1612.03969#5 | 1612.03969 | [
"1503.01007"
]
|
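The multiplicative-mask encoder of equation (1) in the chunk above is small enough to sketch directly. The snippet below is an illustrative reconstruction rather than the authors' code; the function name encode and the array shapes are assumptions made for the example.

import numpy as np

def encode(embeddings, mask):
    # Equation (1): s_t = sum_i f_i * e_i, an elementwise product summed over positions.
    # embeddings: (k, d) word embeddings e_1..e_k of one sentence or window.
    # mask:       (k, d) learned multiplicative mask f_1..f_k, shared across time steps.
    return (mask * embeddings).sum(axis=0)

# With the mask set to all ones this reduces to the plain BoW encoding mentioned in the text.
k, d = 7, 100
rng = np.random.default_rng(0)
s_t = encode(rng.normal(size=(k, d)), np.ones((k, d)))
assert s_t.shape == (d,)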
1612.03969#5 | Tracking the World State with Recurrent Entity Networks | Figure 1: Diagram of the Recurrent Entity Network's dynamic memory (per-slot keys, update modules and update gates writing to the memory slots). Update equations 1 and 2 are represented by the module fθ, where θ is the set of trainable parameters. Equations 3 and 4 are represented by the gate, since they fulfill a similar function. by setting all weights in the multiplicative mask to 1, or can choose a positional encoding model as used in (Sukhbaatar et al., 2015). 2.2 DYNAMIC MEMORY The dynamic memory is a gated recurrent network with a (partially) block structured weight tying scheme. We divide the hidden states of the network into blocks h_1, ..., h_m; the full hidden state is the concatenation of the h_j. In the experiments below, m is of the order of 5 to 20, and each block h_j is of the order of 20 to 100 units. At each time step t, the content of the hidden states {h_j} (which we will call the jth memory) are updated using a set of key vectors {w_j} and the encoded input s_t. In its most general form, the update equations of our model are given by:
$g_j \leftarrow \sigma(s_t^\top h_j + s_t^\top w_j)$ (2)
$\tilde{h}_j \leftarrow \phi(U h_j + V w_j + W s_t)$ (3)
$h_j \leftarrow h_j + g_j \odot \tilde{h}_j$ (4)
$h_j \leftarrow h_j / \|h_j\|$ (5)
Here $\sigma$ represents a sigmoid, $g_j$ is a gating function which determines how much the jth memory should be updated, and $\tilde{h}_j$ is the new candidate value of the memory to be combined with the existing memory $h_j$. The function $\phi$ can be chosen from any number of activation functions; in our experiments we use either parametric ReLU non-linearities (He et al., 2015) or the identity. The matrices U, V, W are typically trainable parameters of the model, and are shared between all the blocks. They can also be fi | 1612.03969#4 | 1612.03969#6 | 1612.03969 | [
"1503.01007"
]
|
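To make equations (2) to (5) above concrete, here is a minimal single-step sketch of the dynamic memory in plain NumPy. It is illustrative only, not the authors' Torch implementation; the names entnet_step, keys and memories are assumptions of the example, and phi defaults to the identity (one of the choices mentioned in the text).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def entnet_step(memories, keys, s_t, U, V, W, phi=lambda x: x):
    # memories, keys: (m, d); s_t: (d,); U, V, W: (d, d), shared across all slots.
    gate = sigmoid(memories @ s_t + keys @ s_t)                # eq. (2): content + location terms
    candidate = phi(memories @ U.T + keys @ V.T + s_t @ W.T)   # eq. (3): candidate memories
    memories = memories + gate[:, None] * candidate            # eq. (4): gated update of every slot
    memories = memories / np.linalg.norm(memories, axis=1, keepdims=True)  # eq. (5): forget via renormalisation
    return memories

Because the gate is a single scalar per slot and the matrices U, V, W are shared, every memory slot is updated in parallel by the same rule, which is the parallelism the text emphasises.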
1612.03969#6 | Tracking the World State with Recurrent Entity Networks | xed to certain values, such as the identity or zero, to yield a simpler model which we use in some of our experiments. The gating function $g_j$ contains two terms: a "content" term $s_t^\top h_j$ which causes the gate to open for memory slots whose content matches the input, and a "location" term $s_t^\top w_j$ which causes the gate to open for memory slots whose key matches the input. The fi | 1612.03969#5 | 1612.03969#7 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#7 | Tracking the World State with Recurrent Entity Networks | nal normalization step allows the model to forget previous information. To see this, note that since the memories lie on the unit sphere, all information is contained in their phase. Adding any vector to a given memory (other than the memory itself) will decrease the cosine distance between the original memory and the updated one. Therefore, as new information is added, old information is forgotten. 2.3 OUTPUT MODULE Whenever the model is required to produce an output, it is presented with a query vector q. Specifically, the output is computed using the following equations: $p_j = \mathrm{Softmax}(q^\top h_j)$, $u = \sum_j p_j h_j$, $y = R\phi(q + Hu)$ (6). The matrices H and R are additional trainable parameters of the model. The output module can be viewed as a one-hop Memory Network (Sukhbaatar et al., 2015) with an additional non-linearity $\phi$ between the internal state and the decoder matrix. If the memory slots correspond to specific words (as we will describe in the following section) which contain the answer, p can be viewed as a distribution over potential answers and can be used to make a prediction directly or fed into a loss function, removing the need for the last two steps. The entire model (all three components described above) is trained via backpropagation through time, receiving gradients from any time steps where the reader is required to produce an output, which are then propagated through the unrolled network. # 3 MOTIVATING EXAMPLE OF OPERATION We now describe a motivating example of how our model can perform reasoning on-the-fly as it is ingesting input sequences. Let us suppose our model is reading a story, so the inputs are natural language sentences, and then it is required to answer questions about the story it has just read. Our model is free to learn the key vectors w_j for each memory j. One choice the model could make is to associate a single memory (via the key) with each entity in the story. The memory slot corresponding to a person could encode that person's location, the objects they are carrying, or the people they are with, depending on what information is relevant for the task at hand. As new information is received indicating that objects are acquired or discarded, or the person changes location, their memory slot will change accordingly. Similarly useful updates can be made for memories corresponding to object and location entities as well. | 1612.03969#6 | 1612.03969#8 | 1612.03969 | [
"1503.01007"
]
|
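The output module of equation (6) above can be sketched in the same style; this is an illustrative reconstruction with a local softmax helper, and R is assumed to map the hidden dimension to the output vocabulary.

import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def entnet_output(memories, query, H, R, phi=lambda x: x):
    # memories: (m, d); query: (d,); H: (d, d); R: (vocab_size, d).
    p = softmax(memories @ query)     # attention over memory slots
    u = p @ memories                  # weighted sum of memories, shape (d,)
    return R @ phi(query + H @ u)     # unnormalised scores y over the output vocabulary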
1612.03969#8 | Tracking the World State with Recurrent Entity Networks | In fact, we could encode this choice of memories directly into our model, which we consider as a type of prior knowledge. By tying the weights of the key vectors with the embeddings of specific words, we can encourage the model to record information about certain words occurring in the text which we believe to be important. For example, given a list of named entities (which could be produced by a standard tagger), we could make the model have a separate memory slot for each entity. We consider this "tied" variant in our experiments. Since the list of entities is independent of the training data, this variant can handle entities not seen in the training set, as long as their embeddings can be initialized in a reasonable way (such as pre-training on a larger corpus). Now, consider that the model reads the following two sentences, and the desired behavior of the gating function and update function at each memory as they are seen: • Mary picked up the ball. Mary went to the garden. As the first sentence s_t is ingested, and assuming memories encode entities, we would like the gates of the memories corresponding to both "Mary" and "ball" to activate. | 1612.03969#7 | 1612.03969#9 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#9 | Tracking the World State with Recurrent Entity Networks | This is possible due to the location addressing term $s_t^\top w_j$ which uses the key $w_j$. We expect that a well trained model would learn to do this. The model would hence modify both the entry corresponding to "Mary" to indicate that she is now carrying the ball, and also the entry corresponding to "ball", to indicate that it is being carried by Mary. When the second sentence is seen, we would like the model to again modify the "Mary" entry to indicate that she is now in the garden, and also modify the "ball" entry to reflect its new location as well. Assuming the information for "Mary" is contained in the "ball" memory as described before, the gate corresponding to "ball" can activate due to the content addressing term $s_t^\top h_j$, even though the word "ball" does not occur in the second sentence. As before, the gate corresponding to the "Mary" entry can open due to the second term. If the gating function and update function have weights such that the steps above are executed, then the memory will be in a state where questions such as "Where is the ball?" or "Where is Mary?" can be answered from the values of relevant memories, without the need for further complex reasoning. | 1612.03969#8 | 1612.03969#10 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#10 | Tracking the World State with Recurrent Entity Networks | This is possible due to the location addressing term sT t wj which uses the key wj . We expect that a well trained model would learn to do this. The model would hence modify both the entry corresponding to â Maryâ to indicate that she is now carrying the ball, and also the entry corresponding to â ballâ , to indicate that it is being carried by Mary. When the second sentence is seen, we would like the model to again modify the â Maryâ entry to indicate that she is now in the garden, and also modify the â ballâ entry to reï¬ ect its new location as well. Assuming the information for â | 1612.03969#9 | 1612.03969#11 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#11 | Tracking the World State with Recurrent Entity Networks | Maryâ is contained in the â ballâ memory as described before, the gate corresponding to â ballâ can activate due to the content addressing term sT t hj, even though the word â ballâ does not occur in the second sentence. As before, the gate corresponding to the â Maryâ entry can open due to the second term. If the gating function and update function have weights such that the steps above are executed, then the memory will be in a state where questions such as â Where is the ball?â or â Where is Mary?â | 1612.03969#10 | 1612.03969#12 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#12 | Tracking the World State with Recurrent Entity Networks | Table 1: a) Error of different models on the World Model Task. b) Generalization of an EntNet trained up to T = 20. All errors range from 0 to 1. (a) Error by model (T = 10 / T = 20 / T = 40): MemN2N 0.09 / 0.633 / 0.896; LSTM 0 / 0.157 / 0.226; EntNet 0 / 0 / 0. (b) Generalization (T: Error): 20: 0; 30: 0; 40: 0; 50: 0.01; 60: 0.03; 70: 0.05; 80: 0.08. # 5 EXPERIMENTS In this section we evaluate our model on three different datasets. Training details common to all experiments can be found in Appendix A. 5.1 SYNTHETIC WORLD MODEL TASK | 1612.03969#11 | 1612.03969#13 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#13 | Tracking the World State with Recurrent Entity Networks | The Dynamic Memory Network of (Xiong et al., 2016) also performs updates via a re- current model, however it links memories to input tokens and updates them sequentially rather than in parallel. The weight tying scheme and the parallel gated RNNs recall the gated graph network of (Li et al., 2015). If we interpret our work in that context, the â graphâ is just a set of vertices with no edges; our gating mechanism is also somewhat different than the one they use. The CommNN model of (Sukhbaatar et al., 2016), the Interaction Network of (?), the Neural Physics Engine of (?) and the model of (?) also use a set of parallel recurrent models with tied weights, but differ from our model in their use of inter-network communication and the lack of a gating mechanism. Finally, there is another class of recent models that have a writeable memory arranged as (un- bounded) stacks, linked lists or queues (Joulin & Mikolov, 2015; Grefenstette et al., 2015). Our model is different from these in that we use a key-value pair array instead of a stack, and in the experiments in this work, the array is of ï¬ xed size. | 1612.03969#12 | 1612.03969#14 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#14 | Tracking the World State with Recurrent Entity Networks | 5 Published as a conference paper at ICLR 2017 Model MemN2N 0.09 LSTM EntNet T = 10 T = 20 0.633 0.157 0 T = 40 0.896 0.226 0 0 0 T Error 20 0 30 0 40 0 50 0.01 60 0.03 70 0.05 80 0.08 (a) (b) Table 1: a) Error of different models on the World Model Task. b) Generalization of an EntNet trained up to T = 20. All errors range from 0 to 1. # 5 EXPERIMENTS In this section we evaluate our model on three different datasets. Training details common to all experiments can be found in Appendix A. 5.1 SYNTHETIC WORLD MODEL TASK | 1612.03969#13 | 1612.03969#15 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#15 | Tracking the World State with Recurrent Entity Networks | We ï¬ rst study our modelâ s properties on a toy task designed to measure the ability to keep a world model in memory. In this task two agents are initially placed randomly on an 10à 10 grid, and at each time step a randomly chosen agent either changes direction or moves ahead. After a certain number of time steps, the model is required to provide the locations of each of the agents, thus revealing its internal world model (details can be found in Appendix B). This task is challenging because the model must combine up to T â 2 supporting facts in order to answer the question correctly, and must also keep the locations of both agents in memory and update them at different times. We compared the performance of a MemN2N, LSTM and EntNet. For the MemN2N, we set the number of hops equal to T â 2 and the embedding dimension to d = 20. The EntNet had embedding dimension d = 20 and 5 memory slots, and the LSTM had 50 hidden units which resulted in it having signiï¬ cantly more parameters than the other two models. For each model, we repeated the experi- ment with 5 different initializations and reported the best performance. All models were trained with ADAM (Kingma & Ba, 2014) with initial learning rates set by grid search over {0.1, 0.01, 0.001} and divided by 2 every 10,000 updates. Table 1a shows the results. The MemN2N has the worst performance, which degrades quickly as the length of the sequence increases. The LSTM performs better, but still loses accuracy as the length of the sequence increases. In contrast, the EntNet is able to solve the task in all cases. The ability to generalize to sequences longer than those seen during training is a desirable property, which suggests that the network has learned the dynamics of the world it is trying to model. It also means the model can be trained less expensively. To study this, we trained an EntNet on variable length sequences between 1 and 20, and evaluated it on different length sequences longer than 20. Results are shown in Table 1b. We see that the model is able to achieve good performance several times past its training horizon. 5.2 BABI TASKS | 1612.03969#14 | 1612.03969#16 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#16 | Tracking the World State with Recurrent Entity Networks | We next evaluate our model on the bAbI tasks, which are a collection of 20 synthetic question- answering datasets ï¬ rst introduced in (Weston et al., 2015) designed to test a wide variety of rea- soning abilities. They have since become a benchmark for memory-augmented neural networks and most of the related methods described in Section 4 have been tested on them. Performance is mea- sured using two metrics: the average error across all tasks, and the number of failed tasks (more than 5% error). We used version 1.2 of the dataset with 10k samples. 1 Training Details We used a similar training setup as (Sukhbaatar et al., 2015). All models were trained with ADAM using a learning rate of η = 0.01, which was divided by 2 every 25 epochs until 200 epochs were reached. Copying previous works (Sukhbaatar et al., 2015; Xiong et al., 2016), the capacity of the memory was limited to the most recent 70 sentences, except for task 3 which was limited to 130 sentences. Due to the high variance in model performance for some tasks, for 1Code to reproduce these experiments can be found at https://github.com/facebook/MemNN/tree/master/EntNet-babi. 6 | 1612.03969#15 | 1612.03969#17 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#17 | Tracking the World State with Recurrent Entity Networks | Table 2: Results on bAbI Tasks with 10k training samples. Error rates per task, columns NTM / D-NTM / MemN2N / DNC / DMN+ / EntNet.
Task 1 (1 supporting fact): 31.5 / 4.4 / 0 / 0 / 0 / 0
Task 2 (2 supporting facts): 54.5 / 27.5 / 0.3 / 0.4 / 0.3 / 0.1
Task 3 (3 supporting facts): 43.9 / 71.3 / 2.1 / 1.8 / 1.1 / 4.1
Task 4 (2 argument relations): 0 / 0 / 0 / 0 / 0 / 0
Task 5 (3 argument relations): 0.8 / 1.7 / 0.8 / 0.8 / 0.5 / 0.3
Task 6 (yes/no questions): 17.1 / 1.5 / 0.1 / 0 / 0 / 0.2
Task 7 (counting): 17.8 / 6.0 / 2.0 / 0.6 / 2.4 / 0
Task 8 (lists/sets): 13.8 / 1.7 / 0.9 / 0.3 / 0.0 / 0.5
Task 9 (simple negation): 16.4 / 0.6 / 0.3 / 0.2 / 0.0 / 0.1
Task 10 (indefinite knowledge): 16.6 / 19.8 / 0 / 0.2 / 0 / 0.6
Task 11 (basic coreference): 15.2 / 0 / 0.0 / 0 / 0.0 / 0.3
Task 12 (conjunction): 8.9 / 6.2 / 0 / 0 / 0.2 / 0
Task 13 (compound coreference): 7.4 / 7.5 / 0 / 0 / 0 / 1.3
Task 14 (time reasoning): 24.2 / 17.5 / 0.2 / 0.4 / 0.2 / 0
Task 15 (basic deduction): 47.0 / 0 / 0 / 0 / 0 / 0
Task 16 (basic induction): 53.6 / 49.6 / 51.8 / 55.1 / 45.3 / 0.2
Task 17 (positional reasoning): 25.5 / 1.2 / 18.6 / 12.0 / 4.2 / 0.5
Task 18 (size reasoning): 2.2 / 0.2 / 5.3 / 0.8 / 2.1 / 0.3
Task 19 (path finding): 4.3 / 39.5 / 2.3 / 3.9 / 0.0 / 2.3
Task 20 (agent's motivation): 1.5 / 0 / 0 / 0 / 0 / 0
Failed Tasks (> 5% error): 16 / 9 / 3 / 2 / 1 / 0
Mean Error: 20.1 / 12.8 / 4.2 / 3.8 / 2.8 / 0.5 | 1612.03969#16 | 1612.03969#18 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#18 | Tracking the World State with Recurrent Entity Networks | each task we conducted 10 runs with different initializations and picked the best model based on performance on the validation set, as it has been done in previous work. In all experiments, our model had embedding dimension size d = 100 and 20 memory slots. In Table 2 we compare our model to various other state-of-the-art models in the literature: the larger MemN2N reported in the appendix of (Sukhbaatar et al., 2015), the Dynamic Memory Network of (Xiong et al., 2016), the Dynamic Neural Turing Machine (Gulcehre et al., 2016), the Neural Turing Machine (Graves et al., 2014) and the Differentiable Neural Computer (Graves et al., 2016). Our model is able to solve all the tasks, outperforming the other models in terms of both the number of solved tasks and the average error. To analyze what kind of representations our model can learn, we conducted an additional experiment on Task 2 using a simple BoW sentence encoding and key vectors which were tied to entity embeddings. This was designed to make the model more interpretable, since the weight tying forces memory slots to encode information about specific entities. 2 After training, we ran the model over a story and computed the cosine distance between $\phi(Hh_j)$ and each row $r_i$ of the decoder matrix R. This gave us a score which measures the affinity between a given memory slot and each word in the vocabulary. Table 3 shows the nearest neighboring words for each memory slot (which itself corresponds to an entity). We see that the model has indeed stored locations of all of the objects and characters in its memory slots which reflect the final state of the story. In particular, it has the correct answer readily stored in the memory slot of the entity being inquired about (the milk). It also has correct location information about all other non-location entities stored in the appropriate memory slots. Note that it does not store useful or correct information in the memory slots corresponding to 2For most tasks including this one, tying key vectors did not significantly change performance, although it hurt in a few cases (see Appendix C). Therefore we did not apply it in Table 2 | 1612.03969#17 | 1612.03969#19 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#19 | Tracking the World State with Recurrent Entity Networks | Table 3: On the left, the network's final "world model" after reading the story on the right. First and second nearest neighbors from each memory slot are shown, along with their cosine distance.
Key (1-NN, 2-NN):
football: hallway (0.135), dropped (0.056)
milk: garden (0.111), took (0.011)
john: kitchen (0.501), dropped (0.027)
mary: garden (0.442), took (0.034)
sandra: hallway (0.394), kitchen (0.121)
daniel: hallway (0.689), to (0.076)
bedroom: hallway (0.367), dropped (0.075)
kitchen: kitchen (0.483), daniel (0.029)
garden: garden (0.281), where (0.026)
hallway: hallway (0.475), left (0.060)
Story:
mary got the milk there
john moved to the bedroom
sandra went back to the kitchen
mary travelled to the hallway
john got the football there
john went to the hallway
john put down the football
mary went to the garden
john went to the kitchen
sandra travelled to the hallway
daniel went to the hallway
mary discarded the milk
where is the milk ? answer: garden
locations, most likely because this task does not contain questions about locations (such as "who is in the kitchen?"). | 1612.03969#18 | 1612.03969#20 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#20 | Tracking the World State with Recurrent Entity Networks | 7 Published as a conference paper at ICLR 2017 Table 3: On the left, the networkâ s ï¬ nal â world modelâ after reading the story on the right. First and second nearest neighbors from each memory slot are shown, along with their cosine distance. Key 1-NN 2-NN Story hallway (0.135) football garden (0.111) milk kitchen (0.501) john garden (0.442) mary hallway (0.394) sandra daniel hallway (0.689) bedroom hallway (0.367) kitchen (0.483) kitchen garden (0.281) garden hallway (0.475) hallway dropped (0.056) took (0.011) dropped (0.027) took (0.034) kitchen (0.121) to (0.076) dropped (0.075) daniel (0.029) where (0.026) left (0.060) mary got the milk there john moved to the bedroom sandra went back to the kitchen mary travelled to the hallway john got the football there john went to the hallway john put down the football mary went to the garden john went to the kitchen sandra travelled to the hallway daniel went to the hallway mary discarded the milk where is the milk ? answer: garden locations, most likely because this task does not contain questions about locations (such as â who is in the kitchen?â ). 5.3 CHILDRENâ S BOOK TEST (CBT) We next evaluated our model on the Childrenâ s Book Test (Hill et al., 2016), which is a semantic language modeling (sentence completion) benchmark built from childrenâ s books that are freely available from Project Gutenberg 3. Models are required to read 20 consecutive sentences from a given story and use this context to ï¬ ll in a missing word from the 21st sentence. More speciï¬ cally, each sample consists of a tuple (S, q, C, a) where S is the story consisting of 20 sentences, Q is the 21st sentence with one word replaced by a special blank token, C is a set of 10 candidate answers of the same type as the missing word (for example, common nouns or named entities), and a is the true answer (which is always contained in C). | 1612.03969#19 | 1612.03969#21 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#21 | Tracking the World State with Recurrent Entity Networks | It was shown in (Hill et al., 2016) that methods with limited memory such as LSTMs perform well on more frequent, syntax based words such as prepositions and verbs, being similar to human per- formance, but poorly relative to humans on more semantically meaningful words such as named entities and common nouns. Therefore, most recent methods have been evaluated on the Named En- tity and Common Noun subtasks, since they better test the ability of a model to make use of wider contextual information. Training Details We adopted the same window memory approach used in (Hill et al., 2016), where each input corresponds to a window of text from {w(iâ | 1612.03969#20 | 1612.03969#22 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#22 | Tracking the World State with Recurrent Entity Networks | bâ 1/2)...wi...w(i+(bâ 1)/2)} centered at a can- didate wi â C. In our experiments we set b = 5. All models were trained using standard stochastic gradient descent (SGD) with a ï¬ xed learning rate of 0.001. We used separate input encodings for the update and gating functions, and applied a dropout rate of 0.5 to the word embedding dimensions. Key embeddings were tied to the embeddings of the candidate words, resulting in 10 hidden blocks, one per member of C. Due to the weight tying, we did not need a decoder matrix and used the distribution over candidates to directly produce a prediction, as described in Section 3. We found that a simpler version of the model worked best, with U = V = 0, W = I and Ï equal to the identity. We also removed the normalization step in this simpliï¬ ed model, which we found to hurt performance. This can be explained by the fact that the maximum frequency baseline model in (Hill et al., 2016) has performance which is signiï¬ cantly higher than random, and including the normalization step hides this useful frequency-based information. Results We draw a distinction between two setups: the single-pass setup, where the model must read the story and query in order and immediately produce an output, and the multi-pass setup, where the model can use the query to perform attention over the story. | 1612.03969#21 | 1612.03969#23 | 1612.03969 | [
"1503.01007"
]
|
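The window-memory inputs described in the chunk above (width b = 5, centred on candidate occurrences) can be built with a few lines of Python. This is an illustrative sketch of the preprocessing, not the authors' pipeline; the helper name candidate_windows and the toy story are assumptions of the example.

def candidate_windows(tokens, candidates, b=5):
    # One window of width b around every occurrence of a candidate word w_i,
    # i.e. tokens from position i-(b-1)/2 to i+(b-1)/2, truncated at the text boundaries.
    half = (b - 1) // 2
    windows = []
    for i, token in enumerate(tokens):
        if token in candidates:
            windows.append(tokens[max(0, i - half):i + half + 1])
    return windows

story = "mary took the milk and then mary walked to the garden".split()
print(candidate_windows(story, {"mary", "milk", "garden"}))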
1612.03969#23 | Tracking the World State with Recurrent Entity Networks | The ï¬ rst setup is more challenging # 3www.gutenberg.org 8 Published as a conference paper at ICLR 2017 Table 4: Accuracy on CBT test set. Single-pass models encode the document before seeing the query, multi-pass models have access to the query at read time. Model Kneser-Ney Language Model + cache LSTMs (context + query) Window LSTM EntNet (general) EntNet (simple) 0.439 0.418 0.436 0.484 0.616 0.577 0.560 0.582 0.540 0.588 MemNN MemNN + self-sup. Attention Sum Reader (Kadlec et al., 2016) Gated-Attention Reader (Bhuwan Dhingra & Salakhutdinov, 2016) EpiReader (Trischler et al., 2016) AoA Reader (Cui et al., 2016) NSE Adaptive Computation (Munkhdalai & Yu, 2016) 0.493 0.666 0.686 0.690 0.697 0.720 0.732 0.554 0.630 0.634 0.639 0.674 0.694 0.714 # Named Entities Common Nouns # Single Pass # Multi Pass | 1612.03969#22 | 1612.03969#24 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#24 | Tracking the World State with Recurrent Entity Networks | because the model does not know beforehand which query it will be presented with, and must learn to retain information which is useful for a wide variety of potential queries. For this reason it can be viewed as a test of the modelâ s ability to construct a general-purpose representation of the current state of the story. The second setup leverages all available information, and allows the model to use knowledge of which question will be asked when it reads the story. In Table 4, we show the performance of the general EntNet, the simpliï¬ ed EntNet, as well as other single-pass models taken from (Hill et al., 2016). The general EntNet performs better than the LSTMs and n-gram model on the Named Entities Task, but lags behind on the Common Nouns task. | 1612.03969#23 | 1612.03969#25 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#25 | Tracking the World State with Recurrent Entity Networks | The simpliï¬ ed EntNet outperforms all other single-pass models on both tasks, and also per- forms better than the Memory Network which does not use the self-supervision heuristic. However, there is still a performance gap when compared to more sophisticated machine comprehension mod- els, many of which perform multiple layers of attention over the story using query knowledge. The fact that the simpliï¬ ed EntNet is able to obtain decent performance is encouraging since it indicates that the model is able to build an internal representation of the story which it can then use to answer a relatively diverse set of queries. # 6 CONCLUSION Two closely related challenges in artiï¬ cial intelligence are designing models which can maintain an estimate of the state of a world with complex dynamics over long timescales, and models which can predict the forward evolution of the state of the world from partial observation. In this paper, we introduced the Recurrent Entity Network, a new model that makes a promising step towards the ï¬ | 1612.03969#24 | 1612.03969#26 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#26 | Tracking the World State with Recurrent Entity Networks | rst goal. Our model is able to accurately track the world state while reading text stories, which enables it to set a new state-of-the-art on the bAbI tasks, the competitive benchmark of story understanding, by being the ï¬ rst model to solve them all. We also showed that our model is able to capture simple dynamics over long timescales, and is able to perform competitively on a real-world dataset. Although our model was able to solve all the bAbI tasks using 10k training samples, we found that performance dropped considerably when using only 1k samples (see Appendix). Most recent work on the bAbI tasks has focused on the 10k samples setting, and we would like to emphasize that solving them in the 1k samples setting remains an open problem which will require improving the sample efï¬ ciency of reasoning models, including ours. Recent works have made some progress towards the second goal of forward modeling, for instance in capturing simple physics (Lerer et al., 2016), predicting future frames in video (Mathieu et al., 2015) or responses in dialog (Weston, 2016). Although we have only applied our model to tasks | 1612.03969#25 | 1612.03969#27 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#27 | Tracking the World State with Recurrent Entity Networks | 9 Published as a conference paper at ICLR 2017 with textual inputs in this work, the architecture is general and future work should investigate how to combine the EntNetâ s tracking abilities with such predictive models. # REFERENCES Bhuwan Dhingra, Hanxiao Liu, William Cohen and Salakhutdinov, Ruslan. attention readers text comprehension. http://arxiv.org/abs/1606.01549. for CoRR, abs/1606.01549, 2016. Gated- URL Chandar, Sarath, Ahn, Sungjin, Larochelle, Hugo, Vincent, Pascal, Tesauro, Gerald, and Bengio, Yoshua. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016. On In Pro- the properties of neural machine translation: ceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014, pp. 103â 111, 2014. URL http://aclweb.org/anthology/W/W14/W14-4012.pdf. Collobert, Ronan, Kavukcuoglu, Koray, and Farabet, Clment. Torch7: | 1612.03969#26 | 1612.03969#28 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#28 | Tracking the World State with Recurrent Entity Networks | A matlab-like environment for machine learning, 2011. Cui, Yiming, Chen, Zhipeng, Wei, Si, Wang, Shijin, Liu, Ting, and Hu, Guoping. Attention- over-attention neural networks for reading comprehension. CoRR, abs/1607.04423, 2016. URL http://arxiv.org/abs/1607.04423. Graves, Alex, Wayne, Greg, and Dnihelka, Ivo. Neural Turing Machines, September 2014. URL http://arxiv.org/abs/1410.5401. | 1612.03969#27 | 1612.03969#29 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#29 | Tracking the World State with Recurrent Entity Networks | Graves, Alex, Wayne, Greg, Reynolds, Malcolm, Harley, Tim, Danihelka, Ivo, Grabska-Barwi´nska, Agnieszka, Colmenarejo, Sergio G´omez, Grefenstette, Edward, Ramalho, Tiago, Agapiou, John, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016. Grefenstette, Edward, Hermann, Karl Moritz, Suleyman, Mustafa, and Blunsom, Phil. | 1612.03969#28 | 1612.03969#30 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#30 | Tracking the World State with Recurrent Entity Networks | Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828â 1836, 2015. Gulcehre, Caglar, Chandar, Sarath, Cho, Kyunghyun, and Bengio, Yoshua. Dynamic neural tur- ing machines with soft and hard addressing schemes. CoRR, abs/1607.00036, 2016. URL http://arxiv.org/abs/1607.00036. He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectiï¬ ers: Surpass- ing human-level performance on imagenet classiï¬ cation. CoRR, abs/1502.01852, 2015. Hill, Felix, Bordes, Antoine, Chopra, Sumit, and Weston, Jason. | 1612.03969#29 | 1612.03969#31 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#31 | Tracking the World State with Recurrent Entity Networks | The goldilocks principle: Read- ing childrenâ s books with explicit memory representations. In Proceedings of the International Conference on Learning Representations. 2016. Hochreiter, Sepp and Schmidhuber, J¨urgen. Long short-term memory. Neural Comput., 9(8): doi: 10.1162/neco.1997.9.8.1735. URL 1735â 1780, November 1997. http://dx.doi.org/10.1162/neco.1997.9.8.1735. ISSN 0899-7667. Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. arXiv preprint arXiv:1503.01007, 2015. Kadlec, Rudolf, Schmid, Martin, Bajgar, Ondrej, and Kleindienst, Text under- Jan. CoRR, abs/1603.01547, 2016. URL standing with the attention sum reader network. http://arxiv.org/abs/1603.01547. Kingma, Diederik P. and Ba, Jimmy. Adam: | 1612.03969#30 | 1612.03969#32 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#32 | Tracking the World State with Recurrent Entity Networks | A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980. 10 Published as a conference paper at ICLR 2017 intuition of block tow- In Proceedings of the 33nd International Conference on Machine Learn- ers by example. ing, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 430â 438, 2016. URL http://jmlr.org/proceedings/papers/v48/lerer16.html. Li, Yujia, Tarlow, Daniel, Brockschmidt, Marc, and Zemel, Richard S. Gated graph sequence neural networks. CoRR, abs/1511.05493, 2015. URL http://arxiv.org/abs/1511.05493. Mathieu, Micha¨el, Couprie, Camille, prediction beyond mean square http://arxiv.org/abs/1511.05440. and LeCun, Yann. CoRR, Deep multi-scale video URL error. abs/1511.05440, 2015. Miller, Alexander, Fisch, Adam, Dodge, Jesse, Karimi, Amir-Hossein, Bordes, Antoine, and We- arXiv preprint ston, Jason. Key-value memory networks for directly reading documents. arXiv:1606.03126, 2016. Munkhdalai, Tsendsuren and Yu, Hong. ral networks language comprehension. https://arxiv.org/abs/1610.06454. for Reasoning with memory augmented neu- URL CoRR, abs/1610.06454, 2016. | 1612.03969#31 | 1612.03969#33 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#33 | Tracking the World State with Recurrent Entity Networks | End- In Cortes, C., Lawrence, N. D., Lee, D. D., to-end memory networks. Information Pro- Sugiyama, M., cessing Systems URL 2015. http://papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf. Sukhbaatar, Sainbayar, communication with http://arxiv.org/abs/1605.07736. Szlam, Arthur, backpropagation. and Fergus, Rob. CoRR, abs/1605.07736, Learning multiagent URL 2016. Trischler, Adam, Ye, Zheng, Yuan, Xingdi, guage comprehension with the epireader. http://arxiv.org/abs/1606.02270. and Suleman, Kaheer. CoRR, abs/1606.02270, 2016. | 1612.03969#32 | 1612.03969#34 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#34 | Tracking the World State with Recurrent Entity Networks | Natural lan- URL Weston, Jason. Dialog-based language learning. CoRR, abs/1604.06045, 2016. URL http://arxiv.org/abs/1604.06045. Weston, Jason, Chopra, Sumit, and Bordes, Antoine. Memory networks. CoRR, abs/1410.3916, 2014. URL http://arxiv.org/abs/1410.3916. Weston, Jason, Bordes, Antoine, Chopra, Sumit, and Mikolov, Tomas. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698, 2015. URL http://arxiv.org/abs/1502.05698. | 1612.03969#33 | 1612.03969#35 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#35 | Tracking the World State with Recurrent Entity Networks | Xiong, Caiming, Merity, Stephen, and Socher, Richard. Dynamic memory networks for visual and textual question answering. In ICML, 2016. # A TRAINING DETAILS All models were implemented using Torch (Collobert et al., 2011). In all experiments, we initialized our model by drawing weights from a Gaussian distribution with mean zero and standard deviation 0.1, except for the PReLU slopes and encoder weights which were initialized to 1. Note that the PReLU initialization is related to two of the heuristics used in (Sukhbaatar et al., 2015), namely starting training with a purely linear model, and adding non-linearities to half of the hidden units. Our initialization allows the model to choose when and how much to enter the non-linear regime. Initializing the encoder weights to 1 corresponds to beginning with a BoW encoding, which the model can then choose to modify. The initial values of the memory slots were initialized to the key values, which we found to help performance. Optimization was done with SGD or ADAM using minibatches of size 32, and gradients with norm greater than 40 were clipped to 40. A null symbol whose embedding was constrained to be zero was used to pad all sentences or windows to a ï¬ xed size. | 1612.03969#34 | 1612.03969#36 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#36 | Tracking the World State with Recurrent Entity Networks | # B DETAILS OF WORLD MODEL EXPERIMENTS Two agents are initially placed at random on a 10 × 10 grid with 100 distinct locations {(1, 1), (1, 2), ...(9, 10), (10, 10)}. At each time step an agent is chosen at random. There are two types of actions: the agent can face a given direction, or can move a number of steps ahead. Actions are sampled until a legal action is found by either choosing to change direction or move with equal probability. If they change direction, the direction is chosen between north, south, east and west with equal probability. If they move, the number of steps is randomly chosen between 1 and 5. | 1612.03969#35 | 1612.03969#37 | 1612.03969 | [
"1503.01007"
]
|
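The data-generation procedure in Appendix B above can be sketched as follows. This is an illustrative reconstruction from the description, not the authors' generator, and the exact text format and action budget are approximations.

import random

DIRS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def generate_story(T=10, size=10, seed=0):
    rng = random.Random(seed)
    agents = ("agent1", "agent2")
    pos, facing, story = {}, {}, []
    for a in agents:
        pos[a] = (rng.randint(1, size), rng.randint(1, size))
        facing[a] = rng.choice(list(DIRS))
        story.append(f"{a} is at ({pos[a][0]},{pos[a][1]})")
        story.append(f"{a} faces-{facing[a]}")
    for _ in range(T - 2):
        a = rng.choice(agents)
        while True:  # resample until a legal action is found
            if rng.random() < 0.5:
                facing[a] = rng.choice(list(DIRS))
                story.append(f"{a} faces-{facing[a]}")
                break
            steps = rng.randint(1, 5)
            dx, dy = DIRS[facing[a]]
            x, y = pos[a][0] + dx * steps, pos[a][1] + dy * steps
            if 1 <= x <= size and 1 <= y <= size:  # reject moves that leave the grid
                pos[a] = (x, y)
                story.append(f"{a} moves-{steps}")
                break
    for i, a in enumerate(agents, 1):
        story.append(f"Q{i}: where is {a} ? A{i}: ({pos[a][0]},{pos[a][1]})")
    return story

print("\n".join(generate_story()))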
1612.03969#37 | Tracking the World State with Recurrent Entity Networks | A legal action is one which does not place the agent off the grid. Stories are given to the network in textual form, an example of which is below. The first action after each agent is placed on the grid is to face a given direction. Therefore, the maximum number of actions made by one agent is T − 2. The network learns word embeddings for all words in the vocabulary such as locations, agent identifiers and actions. At question time, the model must predict the correct answer (which will always be a location) from all the tokens in the vocabulary.
agent1 is at (2,8)
agent1 faces-N
agent2 is at (9,7)
agent2 faces-N
agent2 moves-2
agent2 faces-E
agent2 moves-1
agent1 moves-1
agent2 faces-S
agent2 moves-5
Q1: where is agent1 ? A1: (2,9)
Q2: where is agent2 ? A2: (10,4)
# C ADDITIONAL RESULTS ON BABI TASKS We provide some additional experiments on the bAbI tasks, in order to better understand the influence of architecture, weight tying, and amount of training data. Table 5 shows results when a simple BoW encoding is used for the inputs. Here, the EntNet still performs better than a MemN2N which uses the same encoding scheme, indicating that the architecture has an important effect. Tying the key vectors to entities did not help, and hurt performance for some tasks. Table 6 shows results when using only 1k training samples. In this setting, the EntNet performs worse than the MemN2N. Table 7 shows results for the EntNet and the DNC when models are trained on all tasks jointly. We report results for the mean performance across different random seeds (20 for the DNC, 5 for the EntNet), as well as the performance for the single best seed (measured by validation error). The DNC results for mean performance were taken from the appendix of Graves et al. (2016). The DNC has better performance in terms of the best seed, but also exhibits high variation across seeds, indicating that many different runs are required to achieve good performance. The EntNet exhibits less variation across runs and is able to solve more tasks consistently. Note that Table 2 reports DNC results with joint training, since results when training on each task separately were not available. | 1612.03969#36 | 1612.03969#38 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#38 | Tracking the World State with Recurrent Entity Networks | Table 5: Error rates on bAbI Tasks with inputs encoded using BoW. "Tied" refers to the case where key vectors are tied with entity embeddings. Columns MemN2N / EntNet-tied / EntNet.
Task 1 (1 supporting fact): 0 / 0 / 0
Task 2 (2 supporting facts): 0.6 / 3.0 / 1.2
Task 3 (3 supporting facts): 7 / 9.6 / 9.0
Task 4 (2 argument relations): 32.6 / 33.8 / 31.8
Task 5 (3 argument relations): 10.2 / 1.7 / 3.5
Task 6 (yes/no questions): 0.2 / 0 / 0
Task 7 (counting): 10.6 / 0.5 / 0.5
Task 8 (lists/sets): 2.6 / 0.1 / 0.3
Task 9 (simple negation): 0.3 / 0 / 0
Task 10 (indefinite knowledge): 0.5 / 0 / 0
Task 11 (basic coreference): 0 / 0.3 / 0
Task 12 (conjunction): 0 / 0 / 0
Task 13 (compound coreference): 0 / 0.2 / 0.4
Task 14 (time reasoning): 0.1 / 6.2 / 0.1
Task 15 (basic deduction): 11.4 / 12.5 / 12.1
Task 16 (basic induction): 52.9 / 46.5 / 0
Task 17 (positional reasoning): 39.3 / 40.5 / 40.5
Task 18 (size reasoning): 40.5 / 44.2 / 45.7
Task 19 (path finding): 74.4 / 75.1 / 74.0
Task 20 (agent's motivation): 0 / 0 / 0
Failed Tasks (> 5%): 9 / 8 / 6
Mean Error: 15.6 / 13.7 / 10.9 | 1612.03969#37 | 1612.03969#39 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#39 | Tracking the World State with Recurrent Entity Networks | Table 6: Results on bAbI Tasks with 1k samples. Columns MemN2N / EntNet.
Task 1 (1 supporting fact): 0 / 0.7
Task 2 (2 supporting facts): 8.3 / 56.4
Task 3 (3 supporting facts): 40.3 / 69.7
Task 4 (2 argument relations): 2.8 / 1.4
Task 5 (3 argument relations): 13.1 / 4.6
Task 6 (yes/no questions): 7.6 / 30.0
Task 7 (counting): 17.3 / 22.3
Task 8 (lists/sets): 10.0 / 19.2
Task 9 (simple negation): 13.2 / 31.5
Task 10 (indefinite knowledge): 15.1 / 15.6
Task 11 (basic coreference): 0.9 / 8.0
Task 12 (conjunction): 0.2 / 0.8
Task 13 (compound coreference): 0.4 / 9.0
Task 14 (time reasoning): 1.7 / 62.9
Task 15 (basic deduction): 0 / 57.8
Task 16 (basic induction): 1.3 / 53.2
Task 17 (positional reasoning): 51.0 / 46.4
Task 18 (size reasoning): 11.1 / 8.8
Task 19 (path finding): 82.8 / 90.4
Task 20 (agent's motivation): 0 / 2.6
Failed Tasks (> 5%): 11 / 15
Mean Error: 13.9 / 29.6 | 1612.03969#38 | 1612.03969#40 | 1612.03969 | [
"1503.01007"
]
|
1612.03969#40 | Tracking the World State with Recurrent Entity Networks | All Seeds Best Seed DNC EntNet 0 0.4 1.8 0 0.8 0 0.6 0.3 0.2 0.2 0 0 0 0.4 0 55.1 12.0 0.8 3.9 0 2 3.8 Task 1: 1 supporting fact 2: 2 supporting facts 3: 3 supporting facts 4: 2 argument relations 5: 3 argument relations 6: yes/no questions 7: counting 8: lists/sets 9: simple negation 10: indefinite knowledge 11: basic coreference 12: conjunction 13: compound coreference 14: time reasoning 15: basic deduction 16: basic induction 17: positional reasoning 18: size reasoning 19: path finding 20: agent's motivation Failed Tasks (> 5%): Mean Error: | 1612.03969#39 | 1612.03969#41 | 1612.03969 | [
"1503.01007"
]
|