doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1612.03969 | 24 | Training Details We used a similar training setup as (Sukhbaatar et al., 2015). All models were trained with ADAM using a learning rate of η = 0.01, which was divided by 2 every 25 epochs until 200 epochs were reached. Copying previous works (Sukhbaatar et al., 2015; Xiong et al., 2016), the capacity of the memory was limited to the most recent 70 sentences, except for task 3 which was limited to 130 sentences. Due to the high variance in model performance for some tasks, for
1Code to reproduce these experiments can be found at
https://github.com/facebook/MemNN/tree/master/EntNet-babi.
Table 2: Results on bAbI Tasks with 10k training samples. | 1612.03969#24 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
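The training schedule described in the chunk above (Adam at η = 0.01, halved every 25 epochs, for 200 epochs) can be expressed as a standard step decay. A minimal PyTorch-style sketch follows; the model, data loader, and the gradient-clipping value are placeholders and assumptions, not details taken from the paper.

```python
import torch

def train_entnet(model, loader, epochs=200, lr=0.01, clip=40.0):
    """Step-decay schedule from the chunk above: Adam at lr=0.01,
    halved every 25 epochs, for 200 epochs. `model`, `loader` and the
    clipping value `clip` are illustrative placeholders."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=25, gamma=0.5)
    for epoch in range(epochs):
        for story, query, answer in loader:
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(story, query), answer)
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
            opt.step()
        sched.step()  # divide the learning rate by 2 every 25 epochs
```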
1612.03651 | 25 | Pruning. Figure 2 shows the performance of our model with different sizes. We fix k = d/2 and use different pruning thresholds. NPQ offers a compression rate of ×10 compared to the full model. As the pruning becomes more aggressive, the overall compression can increase up to ×1,000 with little drop of performance and no additional overhead at test time. In fact, using a smaller dictionary makes the model faster at test time. We also compare with character-level Convolutional Neural Networks (CNN) (Zhang et al., 2015; Xiao & Cho, 2016). They are attractive models for text classification because they achieve similar performance with less memory usage than linear models (Xiao & Cho, 2016). Even though fastText with the default setting uses more memory, NPQ is already on par with CNNs' memory usage. Note that CNNs are not quantized, and it would be worth seeing how much they can be quantized with no drop of performance. Such a study is beyond the scope of this paper. Our pruning is based on the norm of the embeddings according to the guidelines of Section 3.3. Table 1 compares the ranking obtained with norms to the ranking obtained using entropy, which is commonly used in unsupervised settings (Stolcke, 2000). | 1612.03651#25 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
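As a rough illustration of the product-quantization idea discussed in the chunk above (split each d-dimensional embedding into sub-vectors, store only centroid indices, and optionally encode the norm separately as in the normalized variant), here is a hedged NumPy/scikit-learn sketch. The sub-vector count and centroid count are illustrative defaults, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def pq_encode(emb, n_sub=4, n_centroids=256, normalize=True, seed=0):
    """Quantize an (n, d) embedding matrix with product quantization.
    Each row is split into n_sub sub-vectors; each sub-vector is replaced by
    the index of its nearest k-means centroid (1 byte when n_centroids<=256).
    With normalize=True the row norm is stored separately and only the
    direction is quantized, in the spirit of the normalized variant above."""
    n, d = emb.shape
    assert d % n_sub == 0
    norms = np.linalg.norm(emb, axis=1, keepdims=True) if normalize else np.ones((n, 1))
    unit = emb / np.maximum(norms, 1e-12)
    sub = unit.reshape(n, n_sub, d // n_sub)
    codebooks, codes = [], np.empty((n, n_sub), dtype=np.uint8)
    for s in range(n_sub):
        km = KMeans(n_clusters=n_centroids, n_init=4, random_state=seed).fit(sub[:, s, :])
        codebooks.append(km.cluster_centers_)
        codes[:, s] = km.labels_
    return norms, codebooks, codes

def pq_decode(norms, codebooks, codes):
    """Reconstruct approximate embeddings from codes and codebooks."""
    parts = [codebooks[s][codes[:, s]] for s in range(codes.shape[1])]
    return norms * np.concatenate(parts, axis=1)
```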
1612.03928 | 25 | Test error (%) per student/teacher pair, reported as: student, AT, F-ActT, KD, AT+KD, teacher.
NIN-thin 0.2M / NIN-wide 1M: 9.38, 8.93, 9.05, 8.55, 8.33, 7.28
WRN-16-1 0.2M / WRN-16-2 0.7M: 8.77, 7.93, 8.51, 7.41, 7.51, 6.31
WRN-16-1 0.2M / WRN-40-1 0.6M: 8.77, 8.25, 8.62, 8.39, 8.01, 6.58
WRN-16-2 0.7M / WRN-40-2 2.2M: 6.31, 5.85, 6.24, 6.08, 5.71, 5.23
Table 1: Activation-based attention transfer (AT) with various architectures on CIFAR-10. Error is computed as median of 5 runs with different seed. F-ActT means full-activation transfer (see §4.1.2). | 1612.03928#25 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 25 | Task 1: 1 supporting fact 2: 2 supporting facts 3: 3 supporting facts 4: 2 argument relations 5: 3 argument relations 6: yes/no questions 7: counting 8: lists/sets 9: simple negation 10: indefinite knowledge 11: basic coreference 12: conjunction 13: compound coreference 14: time reasoning 15: basic deduction 16: basic induction 17: positional reasoning 18: size reasoning 19: path finding 20: agent's motivation 31.5 54.5 43.9 0 0.8 17.1 17.8 13.8 16.4 16.6 15.2 8.9 7.4 24.2 47.0 53.6 25.5 2.2 4.3 1.5 4.4 27.5 71.3 0 1.7 1.5 6.0 1.7 0.6 19.8 0 6.2 7.5 17.5 0 49.6 1.2 0.2 39.5 0 0 0.3 2.1 0 0.8 0.1 2.0 0.9 0.3 0 0.0 0 0 0.2 0 51.8 18.6 5.3 2.3 0 0 0.4 1.8 0 0.8 0 0.6 0.3 0.2 0.2 0 | 1612.03969#25 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 26 | Extreme compression. Finally, in Table 2, we explore the limit of quantized models by looking at the performance obtained for models under 64KiB. Surprisingly, even at 64KiB and 32KiB, the drop of performance is only around 0.8% and 1.7% despite a compression rate of ×1,000–×4,000.
4.2 LARGE DATASET: FLICKRTAG
In this section, we explore the limit of compression algorithms on very large datasets. Similar to Joulin et al. (2016), we consider a hashtag prediction dataset containing 312,116 labels. We set the minimum count for words at 10, leading to a dictionary of 1,427,667 words. We take 10M buckets for n-grams and a hierarchical softmax. We refer to this dataset as FlickrTag.
Output encoding. We are interested in understanding how the performance degrades if the classifier is also quantized (i.e., the matrix B in Eq. 1) and when the pruning is at the limit of the minimum number of features required to cover the full dataset. | 1612.03651#26 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 26 | To verify if having at least one activation-based attention transfer loss per group in WRN transfer is important, we trained three networks with only one transfer loss per network in group1, group2 and group3 separately, and compared to a network trained with all three losses. The corresponding results were 8.11, 7.96, 7.97 (for the separate losses) and 7.93 for the combined loss (using WRN- 16-2/WRN-16-1 as teacher/student pair). Each loss provides some additional degree of attention transfer.
We also explore which attention mapping functions tend to work best using WRN-16-1 and WRN-16-2 as student and teacher networks respectively (table 2). Interestingly, sum-based functions work very similarly, and better than max-based ones. From now on, we will use the sum of squared attention mapping function F^2_sum for simplicity. As for parameter β in eq. 2, it usually varies around 0.1, as we set it to 10^3 divided by the number of elements in the attention map and the batch size for each layer. In case of combining AT with KD we decay it during training in order to simplify learning harder examples.
4.1.2 ACTIVATION-BASED AT VS. TRANSFERRING FULL ACTIVATION | 1612.03928#26 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
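A minimal PyTorch-style sketch of the activation-based attention map and transfer loss discussed in the chunk above, using the sum-of-squares mapping F^2_sum and l2-normalized, vectorized maps. The single `beta` weight below stands in for the per-layer scaling rule described there and should be treated as illustrative.

```python
import torch
import torch.nn.functional as F

def attention_map(activ, p=2):
    """F^p_sum mapping: sum over channels of |A|^p, flattened over
    spatial positions and l2-normalized per example."""
    am = activ.abs().pow(p).sum(dim=1).flatten(start_dim=1)  # (N, C, H, W) -> (N, H*W)
    return F.normalize(am, dim=1)

def at_loss(student_acts, teacher_acts, beta=1e3, p=2):
    """Sum of squared differences between normalized attention maps,
    one term per chosen layer group. `beta` is a single illustrative
    coefficient rather than the paper's per-layer scaling."""
    losses = [
        (attention_map(s, p) - attention_map(t, p)).pow(2).mean()
        for s, t in zip(student_acts, teacher_acts)
    ]
    return beta * sum(losses)
```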
1612.03651 | 27 | Model (k, norm, retrain): Acc., Size
full (uncompressed): 45.4, 12 GiB
Input (k=128): 45.0, 1.7 GiB
Input (k=128, norm): 45.3, 1.8 GiB
Input (k=128, norm, retrain): 45.5, 1.8 GiB
Input+Output (k=128, norm): 45.2, 1.5 GiB
Input+Output (k=128, norm, retrain): 45.4, 1.5 GiB
Table 3: FlickrTag: Influence of quantizing the output matrix on performance. We use PQ for quantization with an optional normalization. We also retrain the output matrix after quantizing the input one. The "norm" refers to the separate encoding of the magnitude and angle, while "retrain" refers to the re-training bottom-up PQ method described in Section 3.2. | 1612.03651#27 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 27 | 4.1.2 ACTIVATION-BASED AT VS. TRANSFERRING FULL ACTIVATION
To check if transferring information from full activation tensors is more beneficial than from attention maps, we experimented with FitNets-style hints using l2 losses on full activations directly, with 1 × 1 convolutional layers to match tensor shapes, and found that improvements over the baseline student were minimal (see column F-ActT in table 1). For networks of the same width but different depth we tried to regress directly to activations, without 1 × 1 convolutions. We also use l2 normalization before transfer losses, and decay β in eq. 2 during training as these give better performance. We find that AT, as well as full-activation transfer, greatly speeds up convergence, but AT gives much
better final accuracy improvement than full-activation transfer (see fig. 7(b), Appendix). It seems quite interesting that attention maps carry information that is more important for transfer than full activations.
attention mapping function (error): no attention transfer 8.77, F_sum 7.99, F^2_sum 7.93, F^4_sum 8.09, F^1_max 8.08 | 1612.03928#27 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03651 | 28 |
Table 3 shows that quantizing both the "input" matrix (i.e., A in Eq. 1) and the "output" matrix (i.e., B) does not degrade the performance compared to the full model. We use embeddings with d = 256 dimensions and use k = d/2 subquantizers. We do not use any text-specific tricks, which leads to a compression factor of 8. Note that even if the output matrix is not retrained over the embeddings, the performance is only 0.2% away from the full model. As shown in the Appendix, using fewer subquantizers significantly decreases the performance for a small memory gain.
Model full Entropy pruning Norm pruning Max-Cover pruning #embeddings Memory Coverage [%] 2M 11.5M 12GiB 297MiB 174MiB 305MiB 179MiB 305MiB 179MiB 73.2 88.4 2M 1M 1M 2M 70.5 70.5 61.9 88.4 1M 88.4 Accuracy 45.4 32.1 30.5 41.6 35.8 45.5 43.9 | 1612.03651#28 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 28 | attention mapping function (error): no attention transfer 8.77, F_sum 7.99, F^2_sum 7.93, F^4_sum 8.09, F^1_max 8.08
norm type (error): baseline (no attention transfer) 13.5, min-l2 Drucker & LeCun (1992) 12.5, grad-based AT 12.1, KD 12.1, symmetry norm 11.8, activation-based AT 11.2
Table 2: Test error of the WRN-16-2/WRN-16-1 teacher/student pair for various attention mapping functions. Median test errors of 5 runs are reported.
Table 3: Performance of various gradient-based attention methods on CIFAR-10. Baseline is a thin NIN network with 0.2M parameters (trained only on horizontally flipped augmented data and without batch normalization), min-l2 refers to using the l2 norm of the gradient w.r.t. input as a regularizer, symmetry norm to using flip invariance on gradient attention maps (see eq. 6), AT to attention transfer, and KD to Knowledge Distillation (both AT and KD use a wide NIN of 1M parameters as teacher).
4.1.3 GRADIENT-BASED ATTENTION TRANSFER | 1612.03928#28 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
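The gradient-based variants listed in the chunk above (minimizing the l2 norm of the input gradient, and matching student and teacher gradient attention maps) both require differentiating through a gradient. A hedged PyTorch sketch using `create_graph=True` for the second backpropagation is shown below; the loss names and the weight are illustrative, not the paper's exact formulation.

```python
import torch

def input_gradient(model, x, y):
    """Gradient of the classification loss w.r.t. the input, kept in the
    graph (create_graph=True) so a penalty on it can itself be
    backpropagated (double backpropagation)."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    return grad

def grad_at_loss(student, teacher, x, y, weight=1.0):
    """Match the student's input-gradient map to the (detached) teacher's;
    `weight` is an illustrative coefficient."""
    gs = input_gradient(student, x, y)
    gt = input_gradient(teacher, x, y).detach()
    return weight * (gs - gt).pow(2).mean()
```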
1612.03969 | 28 | each task we conducted 10 runs with different initializations and picked the best model based on performance on the validation set, as it has been done in previous work. In all experiments, our model had embedding dimension size d = 100 and 20 memory slots.
In Table 2 we compare our model to various other state-of-the-art models in the literature: the larger MemN2N reported in the appendix of (Sukhbaatar et al., 2015), the Dynamic Memory Network of (Xiong et al., 2016), the Dynamic Neural Turing Machine (Gulcehre et al., 2016), the Neural Turing Machine (Graves et al., 2014) and the Differentiable Neural Computer (Graves et al., 2016). Our model is able to solve all the tasks, outperforming the other models in terms of both the number of solved tasks and the average error. | 1612.03969#28 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 29 | Table 4: FlickrTag: Comparison of entropy pruning, norm pruning and max-cover pruning methods. We show the coverage of the test set for each method.
Pruning. Table 4 shows how the performance evolves with pruning. We measure this effect on top of a fully quantized model. The full model misses 11.6% of the test set because of missing words (some documents are either only composed of hashtags or have only rare words). There are 312,116 labels and thus it seems reasonable to keep embeddings on the order of a million. A naive pruning with 1M features misses about 30–40% of the test set, leading to a significant drop of performance. On the other hand, even though the max-coverage pruning approach was set on the train set, it does not suffer from any coverage loss on the test set. This leads to a smaller drop of performance. If the pruning is too aggressive, however, the coverage decreases significantly.
# 5 FUTURE WORK | 1612.03651#29 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
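The coverage issue described in the pruning chunk above can be attacked with a simple greedy set-cover pass that first keeps, for every training document, at least one of its features, and then spends the remaining budget on the highest-scoring features. The sketch below is only in the spirit of that max-coverage pruning; the scoring function and the two-pass structure are assumptions, not the paper's exact algorithm.

```python
def max_cover_prune(docs, scores, budget):
    """Greedy feature selection sketch. `docs` is a list of feature-id lists,
    `scores` maps feature id -> importance (e.g. embedding norm), and
    `budget` is the number of features to keep. Returns kept feature ids."""
    kept = set()
    # Pass 1: ensure every document keeps at least one feature (its best one).
    for feats in docs:
        if feats and not kept.intersection(feats):
            kept.add(max(feats, key=lambda f: scores.get(f, 0.0)))
        if len(kept) >= budget:
            return kept
    # Pass 2: fill the remaining budget with globally best-scoring features.
    for f in sorted(scores, key=scores.get, reverse=True):
        if len(kept) >= budget:
            break
        kept.add(f)
    return kept
```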
1612.03928 | 29 | 4.1.3 GRADIENT-BASED ATTENTION TRANSFER
For simplicity we use a thin Network-In-Network model in these experiments, and don't apply random crop data augmentation with batch normalization, just horizontal flips augmentation. We also only use deterministic algorithms and sampling with a fixed seed, so reported numbers are for single-run experiments. We find that in this setting the network already struggles to fit the training data, and turn off weight decay even for baseline experiments. In the future we plan to explore gradient-based attention for teacher-student pairs that make use of batch normalization, because it is so far unclear how batch normalization should behave in the second backpropagation step required during gradient-based attention transfer (e.g., should it contribute to batch normalization parameters, or is a separate forward propagation with fixed parameters needed).
We explored the following methods:
⢠Minimizing l2 norm of gradient w.r.t. input, i.e. the double backpropagation method Drucker & LeCun (1992);
Symmetry norm on gradient attention maps (see eq. 6);
Student-teacher gradient-based attention transfer;
Student-teacher activation-based attention transfer. | 1612.03928#29 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 29 | To analyze what kind of representations our model can learn, we conducted an additional experiment on Task 2 using a simple BoW sentence encoding and key vectors which were tied to entity embeddings. This was designed to make the model more interpretable, since the weight tying forces memory slots to encode information about specific entities. 2 After training, we ran the model over a story and computed the cosine distance between φ(Hh_j) and each row r_i of the decoder matrix R. This gave us a score which measures the affinity between a given memory slot and each word in the vocabulary. Table 3 shows the nearest neighboring words for each memory slot (which itself corresponds to an entity). We see that the model has indeed stored locations of all of the objects and characters in its memory slots which reflect the final state of the story. In particular, it has the correct answer readily stored in the memory slot of the entity being inquired about (the milk). It also has correct location information about all other non-location entities stored in the appropriate memory slots. Note that it does not store useful or correct information in the memory slots corresponding to
2For most tasks including this one, tying key vectors did not significantly change performance, although it hurt in a few cases (see Appendix C). Therefore we did not apply it in Table 2
| 1612.03969#29 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
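The slot-inspection procedure in the chunk above (ranking vocabulary words by the cosine distance between φ(Hh_j) and the rows r_i of the decoder matrix R) is a few lines of tensor code. A hedged sketch follows; shapes, the nonlinearity `phi`, and the top-k interface are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def nearest_words(memory, H, R, vocab, phi=torch.tanh, topk=2):
    """For each memory slot h_j, rank vocabulary words by cosine similarity
    between phi(H h_j) and the decoder rows r_i.
    Assumed shapes: memory (slots, d), H (d, d), R (vocab_size, d);
    phi is a placeholder for the model's nonlinearity."""
    proj = phi(memory @ H.t())                                             # rows = (H h_j)^T
    sims = F.cosine_similarity(proj.unsqueeze(1), R.unsqueeze(0), dim=-1)  # (slots, vocab)
    top = sims.topk(topk, dim=1)
    return [[(vocab[int(i)], float(s)) for i, s in zip(idx, val)]
            for idx, val in zip(top.indices, top.values)]
```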
1612.03651 | 30 | # 5 FUTURE WORK
It may be possible to obtain further reduction of the model size in the future. One idea is to condition the size of the vectors (both for the input features and the labels) based on their frequency (Chen et al., 2015; Grave et al., 2016). For example, it is probably not worth representing the rare labels by full 256-dimensional vectors in the case of the FlickrTag dataset. Thus, conditioning the vector size on the frequency and norm seems like an interesting direction to explore in the future.
We may also consider combining the entropy and norm pruning criteria: instead of keeping the features in the model based just on the frequency or the norm, we can use both to keep a good set of features. This could help to keep features that are both frequent and discriminative, and thereby to reduce the coverage problem that we have observed.
Additionally, instead of pruning out the less useful features, we can decompose them into smaller units (Mikolov et al., 2012). For example, this can be achieved by splitting every non-discriminative word into a sequence of character trigrams. This could help in cases where training and test examples are very short (for example just a single word).
# 6 CONCLUSION | 1612.03651#30 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 30 | Symmetry norm on gradient attention maps (see eq. 6);
Student-teacher gradient-based attention transfer;
Student-teacher activation-based attention transfer.
Results for various methods are shown in table 3. Interestingly, just minimizing the l2 norm of the gradient already works pretty well. Also, the symmetry norm is one of the best performing attention norms, which we plan to investigate in the future on other datasets as well. We also observe that, similar to activation-based attention transfer, using gradient-based attention transfer leads to improved performance. We also trained a network with activation-based AT in the same training conditions, which resulted in the best performance among all methods. We should note that the architecture of the student NIN without batch normalization is slightly different from the teacher network: it doesn't have ReLU activations before pooling layers, which leads to better performance without batch normalization, and worse with. So to achieve the best performance with activation-based AT we had to train a new teacher, with batch normalization and without ReLU activations before pooling layers, and have AT losses on outputs of convolutional layers.
# 4.2 LARGE INPUT IMAGE NETWORKS
In this section we experiment with hidden activation attention transfer on ImageNet networks which have 224 × 224 input image size. Presumably, attention matters more in this kind of network, as the spatial resolution of the attention maps is higher.
| 1612.03928#30 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 30 |
Table 3: On the left, the network's final "world model" after reading the story on the right. First and second nearest neighbors from each memory slot are shown, along with their cosine distance.
Key 1-NN 2-NN Story hallway (0.135) football garden (0.111) milk kitchen (0.501) john garden (0.442) mary hallway (0.394) sandra daniel hallway (0.689) bedroom hallway (0.367) kitchen (0.483) kitchen garden (0.281) garden hallway (0.475) hallway dropped (0.056) took (0.011) dropped (0.027) took (0.034) kitchen (0.121) to (0.076) dropped (0.075) daniel (0.029) where (0.026) left (0.060) mary got the milk there john moved to the bedroom sandra went back to the kitchen mary travelled to the hallway john got the football there john went to the hallway john put down the football mary went to the garden john went to the kitchen sandra travelled to the hallway daniel went to the hallway mary discarded the milk where is the milk ? answer: garden | 1612.03969#30 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 31 | # 6 CONCLUSION
In this paper, we have presented several simple techniques to reduce, by several orders of magnitude, the memory complexity of certain text classifiers without sacrificing accuracy or speed. This is achieved by applying discriminative pruning, which aims to keep only important features in the trained model, and by performing quantization of the weight matrices and hashing of the dictionary.
We will publish the code as an extension of the fastText library. We hope that our work will serve as a baseline to the research community, where there is an increasing interest in comparing the performance of various deep learning text classifiers for a given number of parameters. Overall, compared to recent work based on convolutional neural networks, fastText.zip is often more accurate, while requiring several orders of magnitude less time to train on common CPUs, and incurring a fraction of the memory complexity.
# REFERENCES
Alekh Agarwal, Olivier Chapelle, Miroslav Dudík, and John Langford. A reliable effective terascale linear learning system. Journal of Machine Learning Research, 15(1):1111–1133, 2014. | 1612.03651#31 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03969 | 31 | locations, most likely because this task does not contain questions about locations (such as "who is in the kitchen?").
5.3 CHILDREN'S BOOK TEST (CBT)
We next evaluated our model on the Children's Book Test (Hill et al., 2016), which is a semantic language modeling (sentence completion) benchmark built from children's books that are freely available from Project Gutenberg 3. Models are required to read 20 consecutive sentences from a given story and use this context to fill in a missing word from the 21st sentence. More specifically, each sample consists of a tuple (S, q, C, a) where S is the story consisting of 20 sentences, q is the 21st sentence with one word replaced by a special blank token, C is a set of 10 candidate answers of the same type as the missing word (for example, common nouns or named entities), and a is the true answer (which is always contained in C). | 1612.03969#31 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 32 | Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends® in Machine Learning, 4(1):1–106, 2012.
Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Streaming submodular maximization: Massive data summarization on the fly. In SIGKDD, pp. 671–680. ACM, 2014.
Mohammad Hossein Bateni, Mohammad Taghi Hajiaghayi, and Morteza Zadimoghaddam. Submodular secretary problem and extensions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 39–52. Springer, 2010.
Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pp. 380–388, May 2002.
Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. arXiv preprint arXiv:1512.04906, 2015.
Flavio Chierichetti, Ravi Kumar, and Andrew Tomkins. Max-cover in map-reduce. In International Conference on World Wide Web, 2010. | 1612.03651#32 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 32 | To see how attention transfer works in finetuning we choose two datasets: Caltech-UCSD Birds-200-2011 fine-grained classification (CUB) by Wah et al. (2011), and MIT indoor scene classification (Scenes) by Quattoni & Torralba (2009), both containing around 5K training images. We took ResNet-18 and ResNet-34 pretrained on ImageNet and finetuned on both datasets. On CUB we crop bounding boxes, rescale to 256 in one dimension and then take a random crop. Batch normalization layers are fixed for finetuning, and the first group of residual blocks is frozen. We then took finetuned ResNet-34 networks and used them as teachers for ResNet-18 pretrained on ImageNet, with F^2_sum attention losses on the 2 last groups. In both cases attention transfer provides significant improvements, closing the gap between ResNet-18 and ResNet-34 in accuracy. On Scenes AT works as well as KD, and on CUB AT works much better, which we speculate is due to importance of | 1612.03928#32 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 32 | It was shown in (Hill et al., 2016) that methods with limited memory such as LSTMs perform well on more frequent, syntax-based words such as prepositions and verbs, being similar to human performance, but poorly relative to humans on more semantically meaningful words such as named entities and common nouns. Therefore, most recent methods have been evaluated on the Named Entity and Common Noun subtasks, since they better test the ability of a model to make use of wider contextual information.
Training Details We adopted the same window memory approach used in (Hill et al., 2016), where each input corresponds to a window of text {w_{i-(b-1)/2}, ..., w_i, ..., w_{i+(b-1)/2}} centered at a candidate w_i ∈ C. In our experiments we set b = 5. All models were trained using standard stochastic gradient descent (SGD) with a fixed learning rate of 0.001. We used separate input encodings for the update and gating functions, and applied a dropout rate of 0.5 to the word embedding dimensions. Key embeddings were tied to the embeddings of the candidate words, resulting in 10 hidden blocks, one per member of C. Due to the weight tying, we did not need a decoder matrix and used the distribution over candidates to directly produce a prediction, as described in Section 3. | 1612.03969#32 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
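The window-memory construction described in the chunk above (a window of b = 5 tokens centered on each occurrence of a candidate word) can be sketched as follows; tokenization and the clipping of windows at story boundaries are assumptions, not details from the paper.

```python
def candidate_windows(story_tokens, candidates, b=5):
    """For every position i whose token is one of the candidates, take the
    b tokens centered at i, i.e. w_{i-(b-1)/2} ... w_i ... w_{i+(b-1)/2}.
    Windows crossing the story boundary are simply clipped (an assumption)."""
    half = (b - 1) // 2
    windows = []
    for i, tok in enumerate(story_tokens):
        if tok in candidates:
            lo, hi = max(0, i - half), min(len(story_tokens), i + half + 1)
            windows.append((tok, story_tokens[lo:hi]))
    return windows

# Example: candidate_windows("mary got the milk there".split(), {"milk"}, b=5)
# -> [("milk", ["got", "the", "milk", "there"])]
```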
1612.03651 | 33 | Flavio Chierichetti, Ravi Kumar, and Andrew Tomkins. Max-cover in map-reduce. In International Conference on World Wide Web, 2010.
Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
M. Datar, N. Immorlica, P. Indyk, and V.S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the Symposium on Computational Geometry, pp. 253–262, 2004.
Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American society for information science, 1990.
Misha Denil, Babak Shakibi, Laurent Dinh, Marc-Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. In NIPS, pp. 2148–2156, 2013.
Uriel Feige. A threshold of ln n for approximating set cover. JACM, 45(4):634–652, 1998. | 1612.03651#33 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03969 | 33 | We found that a simpler version of the model worked best, with U = V = 0, W = I and φ equal to the identity. We also removed the normalization step in this simplified model, which we found to hurt performance. This can be explained by the fact that the maximum frequency baseline model in (Hill et al., 2016) has performance which is significantly higher than random, and including the normalization step hides this useful frequency-based information.
Results We draw a distinction between two setups: the single-pass setup, where the model must read the story and query in order and immediately produce an output, and the multi-pass setup, where the model can use the query to perform attention over the story. The first setup is more challenging
# 3www.gutenberg.org
Table 4: Accuracy on CBT test set. Single-pass models encode the document before seeing the query, multi-pass models have access to the query at read time. | 1612.03969#33 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
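For reference, the dynamic-memory update that the chunk above simplifies (setting U = V = 0, W = I, φ = identity and dropping normalization) can be sketched roughly as follows. This is an approximation of the EntNet update as summarized in these chunks and in the abstract; the nonlinearity default and shapes are assumptions.

```python
import torch

def entnet_step(h, keys, s, U, V, W, phi=torch.tanh, normalize=True):
    """One memory update for an input encoding s of shape (d,).
    h: (slots, d) memory contents, keys: (slots, d) key vectors w_j,
    U, V, W: (d, d). phi is a placeholder nonlinearity; the simplified
    CBT variant above uses U = V = 0, W = I, phi = identity, normalize=False."""
    gate = torch.sigmoid(h @ s + keys @ s)            # (slots,) content + location gate
    cand = phi(h @ U.t() + keys @ V.t() + s @ W.t())  # (slots, d) candidate update
    h = h + gate.unsqueeze(1) * cand
    if normalize:
        h = h / h.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return h
```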
1612.03651 | 34 | Uriel Feige. A threshold of ln n for approximating set cover. JACM, 45(4):634–652, 1998.
Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization for approximate nearest neighbor search. In CVPR, June 2013.
Yunchao Gong and Svetlana Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. In CVPR, June 2011.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Efficient softmax approximation for GPUs. arXiv preprint arXiv:1609.04309, 2016.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016. | 1612.03651#34 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 34 | 4.2.2 IMAGENET
To showcase activation-based attention transfer on ImageNet we took ResNet-18 as a student, and ResNet-34 as a teacher, and tried to improve ResNet-18 accuracy. We added only two losses in the last 2 groups of residual blocks and used squared sum attention F^2_sum (sketched in code below). We also did not have time to tune any hyperparameters and kept them from the finetuning experiments. Nevertheless, ResNet-18 with attention transfer achieved 1.1% better top-1 and 0.8% better top-5 validation accuracy (Table 5 and Fig. 7(a), Appendix); we plan to update the paper with losses on all 4 groups of residual blocks. | 1612.03928#34 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
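As a concrete illustration of the activation-based transfer described in the 1612.03928 chunk above, the following PyTorch sketch is our own illustrative code (not the authors' released implementation): it computes the squared-sum spatial attention map F^2_sum(A) = Σ_c A_c², ℓ2-normalizes the vectorized maps of student and teacher, and penalizes their difference; in the ImageNet experiment such a loss would be attached to the outputs of the last two groups of residual blocks. The weight value below is an assumption for demonstration, not a value from the paper.

```python
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Map a (N, C, H, W) activation tensor to an l2-normalized (N, H*W) spatial
    attention map using the squared-sum mapping F_sum^2(A) = sum_c A_c^2."""
    am = feat.pow(2).sum(dim=1).flatten(1)
    return F.normalize(am, p=2, dim=1)

def at_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Attention-transfer penalty for one student/teacher pair of feature maps."""
    return (attention_map(student_feat) - attention_map(teacher_feat)).pow(2).mean()

# toy usage: stand-ins for the outputs of the last two residual groups
s_feats = [torch.randn(4, 256, 14, 14), torch.randn(4, 512, 7, 7)]
t_feats = [torch.randn(4, 256, 14, 14), torch.randn(4, 512, 7, 7)]
ce_loss = torch.tensor(0.0)        # placeholder for the usual cross-entropy term
beta = 1e3                         # illustrative weight only
total_loss = ce_loss + beta * sum(at_loss(s, t.detach()) for s, t in zip(s_feats, t_feats))
```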
1612.03969 | 34 | Table 4: Accuracy on CBT test set. Single-pass models encode the document before seeing the query, multi-pass models have access to the query at read time.
Model                                                            Named Entities   Common Nouns
Single Pass
  Kneser-Ney Language Model + cache                                   0.439           0.577
  LSTMs (context + query)                                             0.418           0.560
  Window LSTM                                                         0.436           0.582
  EntNet (general)                                                    0.484           0.540
  EntNet (simple)                                                     0.616           0.588
Multi Pass
  MemNN                                                               0.493           0.554
  MemNN + self-sup.                                                   0.666           0.630
  Attention Sum Reader (Kadlec et al., 2016)                          0.686           0.634
  Gated-Attention Reader (Bhuwan Dhingra & Salakhutdinov, 2016)       0.690           0.639
  EpiReader (Trischler et al., 2016)                                  0.697           0.674
  AoA Reader (Cui et al., 2016)                                       0.720           0.694
  NSE Adaptive Computation (Munkhdalai & Yu, 2016)                    0.732           0.714
# Named Entities Common Nouns | 1612.03969#34 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 35 | Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Hamming embedding and weak geometric consistency for large scale image search. In ECCV, October 2008.
Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Trans. PAMI, January 2011.
Thorsten Joachims. Text categorization with support vector machines: Learning with many relevant features. Springer, 1998.
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759, 2016.
Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. NIPS, 2:598–605, 1990.
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks with few multiplications. arXiv preprint arXiv:1510.03009, 2015.
Andrew McCallum and Kamal Nigam. A comparison of event models for naive bayes text classiï¬- cation. In AAAI workshop on learning for text categorization, 1998. | 1612.03651#35 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 35 | We were not able to achieve positive results with KD on ImageNet. With a ResNet-18-ResNet-34 student-teacher pair it actually hurts convergence with the same hyperparameters as on CIFAR. As it was reported that KD struggles to work if teacher and student have different architecture/depth (we observe the same on CIFAR), we tried using the same architecture and depth for attention transfer. On CIFAR both AT and KD work well in this case and improve convergence and final accuracy; on ImageNet, though, KD converges significantly slower (we did not train until the end due to lack of computational resources). We also could not find applications of FitNets, KD or similar methods on ImageNet in the literature. Given that, we can assume that the proposed activation-based AT is the first knowledge transfer method to be successfully applied on ImageNet. (A sketch of the standard KD objective is given below.)
# 5 CONCLUSIONS
We presented several ways of transferring attention from one network to another, with experimental results over several image recognition datasets. It would be interesting to see how attention transfer works in cases where spatial information is more important, e.g. object detection or weakly-supervised localization, which is something that we plan to explore in the future.
Overall, we think that our interesting ï¬ndings will help further advance knowledge distillation, and understanding convolutional neural networks in general. | 1612.03928#35 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
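For reference, the knowledge-distillation objective that the chunk above reports as hard to tune on ImageNet is, in its standard form (Hinton et al.), a temperature-softened cross-entropy between teacher and student logits. The sketch below is our own minimal PyTorch rendering of that standard objective, not code from the paper; the function name, temperature and mixing weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Standard knowledge-distillation loss: KL divergence between the
    temperature-softened teacher and student distributions, mixed with the
    usual cross-entropy on the ground-truth labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# toy usage with random logits for a 1000-class problem
student_logits = torch.randn(8, 1000)
teacher_logits = torch.randn(8, 1000)
targets = torch.randint(0, 1000, (8,))
loss = kd_loss(student_logits, teacher_logits, targets)
```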
1612.03651 | 36 | Lukas Meier, Sara Van De Geer, and Peter Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):53–71, 2008.
Tomas Mikolov. Statistical language models based on neural networks. In PhD thesis. VUT Brno, 2012.
Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J Cernocky. Subword language modeling with neural networks. preprint, 2012.
Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric LSHs for inner product search. In ICML, pp. 1926–1934, 2015.
Mohammad Norouzi and David Fleet. Cartesian k-means. In CVPR, June 2013.
Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and trends in infor- mation retrieval, 2008.
Alexandre Sablayrolles, Matthijs Douze, Herv´e J´egou, and Nicolas Usunier. How should we evalu- ate supervised hashing? arXiv preprint arXiv:1609.06753, 2016. | 1612.03651#36 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 36 | Overall, we think that our interesting findings will help further advance knowledge distillation, and understanding convolutional neural networks in general.
# REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. URL http://arxiv.org/abs/1409.0473.
Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In KDD, pp. 535–541, 2006.
Taco S. Cohen and Max Welling. Group equivariant convolutional networks. CoRR, abs/1602.07576, 2016. URL http://arxiv.org/abs/1602.07576.
Misha Denil, Loris Bazzani, Hugo Larochelle, and Nando de Freitas. Learning where to attend with deep architectures for image tracking. Neural Computation, 2012.
H. Drucker and Y LeCun. Improving generalization performance using double backpropagation. IEEE Transaction on Neural Networks, 3(6):991â997, 1992. | 1612.03928#36 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 36 | because the model does not know beforehand which query it will be presented with, and must learn to retain information which is useful for a wide variety of potential queries. For this reason it can be viewed as a test of the model's ability to construct a general-purpose representation of the current state of the story. The second setup leverages all available information, and allows the model to use knowledge of which question will be asked when it reads the story.
In Table 4, we show the performance of the general EntNet, the simplified EntNet, as well as other single-pass models taken from (Hill et al., 2016). The general EntNet performs better than the LSTMs and n-gram model on the Named Entities Task, but lags behind on the Common Nouns task. The simplified EntNet outperforms all other single-pass models on both tasks, and also performs better than the Memory Network which does not use the self-supervision heuristic. However, there is still a performance gap when compared to more sophisticated machine comprehension models, many of which perform multiple layers of attention over the story using query knowledge. The fact that the simplified EntNet is able to obtain decent performance is encouraging since it indicates that the model is able to build an internal representation of the story which it can then use to answer a relatively diverse set of queries. | 1612.03969#36 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 37 | Jorge Sánchez and Florent Perronnin. High-dimensional signature compression for large-scale image classification. In CVPR, 2011.
Anshumali Shrivastava and Ping Li. Asymmetric LSH for sublinear time maximum inner product search. In NIPS, pp. 2321–2329, 2014.
Andreas Stolcke. Entropy-based pruning of backoff language models. arXiv preprint cs/0006025, 2000.
David Talbot and Thorsten Brants. Randomized language models via perfect hash functions. In ACL, 2008.
Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. In Communica- tions of the ACM, 2016.
Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.
Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. Learning to hash for indexing big data - A survey. CoRR, abs/1509.05472, 2015. | 1612.03651#37 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 37 | H. Drucker and Y. LeCun. Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 3(6):991–997, 1992.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. CoRR, abs/1512.03385, 2015.
Geoffrey E. Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. 2015.
Hugo Larochelle and Geoffrey E. Hinton. Learning to combine foveal glimpses with a third-order boltzmann machine. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta (eds.), Advances in Neural Information Processing Systems 23, pp. 1243â1251. Curran Associates, Inc., 2010.
Jimmy Ba Lei and Rich Caruana. Do deep nets really need to be deep? In NIPS, 2014.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. CoRR, abs/1312.4400, 2013. | 1612.03928#37 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 37 | # 6 CONCLUSION
Two closely related challenges in artificial intelligence are designing models which can maintain an estimate of the state of a world with complex dynamics over long timescales, and models which can predict the forward evolution of the state of the world from partial observation. In this paper, we introduced the Recurrent Entity Network, a new model that makes a promising step towards the first goal. Our model is able to accurately track the world state while reading text stories, which enables it to set a new state-of-the-art on the bAbI tasks, the competitive benchmark of story understanding, by being the first model to solve them all. We also showed that our model is able to capture simple dynamics over long timescales, and is able to perform competitively on a real-world dataset.
Although our model was able to solve all the bAbI tasks using 10k training samples, we found that performance dropped considerably when using only 1k samples (see Appendix). Most recent work on the bAbI tasks has focused on the 10k samples setting, and we would like to emphasize that solving them in the 1k samples setting remains an open problem which will require improving the sample efficiency of reasoning models, including ours. | 1612.03969#37 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 38 | Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL, 2012.
Kilian Q Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In ICML, 2009.
Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NIPS, December 2009.
Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367, 2016.
Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.
# APPENDIX
In the appendix, we show some additional results. The model used in these experiments only had 1M ngram buckets. In Table 5, we show a thorough comparison of LSH, PQ and OPQ on 8 different datasets. Table 7 summarizes the comparison with CNNs in terms of accuracy and size. Table 8 show a thorough comparison of the hashing trick and the Bloom ï¬lters. | 1612.03651#38 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 38 | Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. CoRR, abs/1312.4400, 2013.
Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent models of visual attention. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2204–2212. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf.
M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Is object localization for free? weakly-supervised learning with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. In British Machine Vision Conference, 2015.
A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, 2009.
Ronald A. Rensink. The dynamic representation of scenes. In Visual Cognition, pp. 17â42, 2000. | 1612.03928#38 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 38 | Recent works have made some progress towards the second goal of forward modeling, for instance in capturing simple physics (Lerer et al., 2016), predicting future frames in video (Mathieu et al., 2015) or responses in dialog (Weston, 2016). Although we have only applied our model to tasks
with textual inputs in this work, the architecture is general and future work should investigate how to combine the EntNet's tracking abilities with such predictive models.
# REFERENCES
Bhuwan Dhingra, Hanxiao Liu, William Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. CoRR, abs/1606.01549, 2016. URL http://arxiv.org/abs/1606.01549.
Chandar, Sarath, Ahn, Sungjin, Larochelle, Hugo, Vincent, Pascal, Tesauro, Gerald, and Bengio, Yoshua. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016. | 1612.03969#38 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 39 | Quant. m r o k n AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p. full full,nodict 92.1 36M 59.8 97M 94.5 104M 98.4 67M 96.3 47M 92.1 34M 59.9 78M 94.5 72 83M 98.4 56M 96.3 42M 72.2 120M 63.7 56M 95.7 53M 91M 63.6 48M 95.6 46M LSH PQ OPQ LSH PQ OPQ 8 8 8 8 8 8 x x x 88.7 8.5M 51.3 20M 90.3 91.7 8.5M 59.3 20M 94.4 91.9 8.5M 59.3 20M 94.4 91.9 9.5M 59.4 22M 94.5 92.0 9.5M 59.8 22M 94.5 92.1 9.5M 59.9 22M 94.5 21M 92.7 14M 94.2 11M 54.8 21M 97.4 14M 96.1 11M 71.3 21M 96.9 14M 95.8 11M 71.4 24M 97.8 16M 96.2 12M 71.6 24M 98.4 16M | 1612.03651#39 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 39 | Ronald A. Rensink. The dynamic representation of scenes. In Visual Cognition, pp. 17â42, 2000.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. Technical Report Arxiv report 1412.6550, arXiv, 2014.
Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Why did you say that? visual explanations from deep networks via gradient-based localization. 2016.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.
J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In arXiv:1412.6806, also appeared at ICLR 2015 Workshop Track, 2015. URL http://arxiv.org/abs/1412.6806. | 1612.03928#39 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 39 | Cho, Kyunghyun, van Merriënboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014, pp. 103–111, 2014. URL http://aclweb.org/anthology/W/W14/W14-4012.pdf.
Collobert, Ronan, Kavukcuoglu, Koray, and Farabet, Clément. Torch7: A matlab-like environment for machine learning, 2011.
Cui, Yiming, Chen, Zhipeng, Wei, Si, Wang, Shijin, Liu, Ting, and Hu, Guoping. Attention- over-attention neural networks for reading comprehension. CoRR, abs/1607.04423, 2016. URL http://arxiv.org/abs/1607.04423.
Graves, Alex, Wayne, Greg, and Dnihelka, Ivo. Neural Turing Machines, September 2014. URL http://arxiv.org/abs/1410.5401. | 1612.03969#39 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 40 | 71.3 21M 96.9 14M 95.8 11M 71.4 24M 97.8 16M 96.2 12M 71.6 24M 98.4 16M 96.3 12M 72.1 24M 98.4 16M 96.3 12M 72.2 23M 56.7 12M 92.2 12M 23M 62.8 12M 95.4 12M 23M 62.5 12M 95.4 12M 26M 63.4 14M 95.6 13M 26M 63.7 14M 95.6 13M 26M 63.6 14M 95.6 13M LSH PQ OPQ LSH PQ OPQ 4 4 4 4 4 4 x x x 88.3 4.3M 50.5 9.7M 88.9 91.6 4.3M 59.2 9.7M 94.4 91.7 4.3M 59.0 9.7M 94.4 92.1 5.3M 59.2 13M 94.4 92.1 5.3M 59.8 13M 94.5 92.2 5.3M 59.8 13M 94.5 11M 91.6 7.0M 94.3 5.3M 54.6 11M 96.3 7.0M 96.1 5.3M 71.0 11M 96.9 7.0M | 1612.03651#40 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 40 | Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. CoRR, abs/1505.00387, 2015.
C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. CoRR, abs/1502.03044, 2015. URL http://arxiv.org/abs/1502. 03044.
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alexander J. Smola. Stacked attention networks for image question answering. CoRR, abs/1511.02274, 2015. URL http://arxiv. org/abs/1511.02274.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016. | 1612.03928#40 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 40 | Graves, Alex, Wayne, Greg, Reynolds, Malcolm, Harley, Tim, Danihelka, Ivo, Grabska-Barwińska, Agnieszka, Colmenarejo, Sergio Gómez, Grefenstette, Edward, Ramalho, Tiago, Agapiou, John, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.
Grefenstette, Edward, Hermann, Karl Moritz, Suleyman, Mustafa, and Blunsom, Phil. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828â1836, 2015.
Gulcehre, Caglar, Chandar, Sarath, Cho, Kyunghyun, and Bengio, Yoshua. Dynamic neural turing machines with soft and hard addressing schemes. CoRR, abs/1607.00036, 2016. URL http://arxiv.org/abs/1607.00036.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectiï¬ers: Surpass- ing human-level performance on imagenet classiï¬cation. CoRR, abs/1502.01852, 2015. | 1612.03969#40 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 41 | 94.3 5.3M 54.6 11M 96.3 7.0M 96.1 5.3M 71.0 11M 96.9 7.0M 95.6 5.3M 71.2 13M 97.7 8.8M 96.2 6.6M 71.1 13M 98.4 8.8M 96.3 6.6M 72.0 13M 98.3 8.8M 96.3 6.6M 72.1 12M 56.5 6.0M 92.9 5.7M 12M 62.2 6.0M 95.4 5.7M 12M 62.6 6.0M 95.4 5.7M 15M 63.1 7.4M 95.5 7.2M 15M 63.6 7.5M 95.6 7.2M 15M 63.7 7.5M 95.6 7.2M LSH PQ OPQ LSH PQ OPQ 2 2 2 2 2 2 x x x 87.7 2.2M 50.1 4.9M 88.9 5.2M 90.6 3.5M 93.9 2.7M 51.4 5.7M 56.6 3.0M 91.3 2.9M 91.1 2.2M 58.7 4.9M 94.4 5.2M | 1612.03651#41 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 41 | Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
Matthew Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Computer Vision and Pattern Recognition, 2016.
# A APPENDIX
A.1 FIGURES AND TABLES
[Figure 6 image grid: attention maps for test images (predicted classes such as videostore, bookshop/bookstore/bookstall, hospitalroom, dentaloffice) shown column-wise for ResNet-18-ImageNet, ResNet-18-scenes, ResNet-18-scenes-AT, and ResNet-34-scenes.] | 1612.03928#41 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 41 | Hill, Felix, Bordes, Antoine, Chopra, Sumit, and Weston, Jason. The goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of the International Conference on Learning Representations, 2016.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Comput., 9(8):1735–1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.
Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. arXiv preprint arXiv:1503.01007, 2015.
Kadlec, Rudolf, Schmid, Martin, Bajgar, Ondrej, and Kleindienst, Jan. Text understanding with the attention sum reader network. CoRR, abs/1603.01547, 2016. URL http://arxiv.org/abs/1603.01547. | 1612.03969#41 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 42 | 5.7M 56.6 3.0M 91.3 2.9M 91.1 2.2M 58.7 4.9M 94.4 5.2M 87.1 3.6M 95.3 2.7M 69.5 5.7M 62.1 3.0M 95.4 2.9M 91.4 2.2M 58.2 4.9M 94.3 5.2M 91.6 3.6M 94.2 2.7M 69.6 5.7M 62.1 3.0M 95.4 2.9M 91.8 3.2M 58.6 7.3M 94.3 7.8M 97.1 5.3M 96.1 4.0M 69.7 8.6M 62.7 4.5M 95.5 4.3M 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3M 92.1 3.2M 59.5 7.3M 94.5 7.8M 98.1 5.3M 96.2 4.0M 71.5 8.6M 63.4 4.5M 95.6 4.3M | 1612.03651#42 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 42 | Figure 6: Top activation attention maps for different Scenes networks: original pretrained ResNet-18 (ResNet-18-ImageNet), ResNet-18 trained on Scenes (ResNet-18-scenes), ResNet-18 trained with attention transfer (ResNet-18-scenes-AT) with ResNet-34 as a teacher, ResNet-34 trained on Scenes (ResNet-34-scenes). Predicted classes for each task are shown on top. Attention maps look more similar after transfer (images taken from test set).
[Figure 7(a) plot: top-5 error (%) vs. epoch for ResNet-18 and ResNet-18-ResNet-34-AT.]
# error, % | 1612.03928#42 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 42 | Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.
Lerer, Adam, Gross, Sam, and Fergus, Rob. Learning physical intuition of block towers by example. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 430–438, 2016. URL http://jmlr.org/proceedings/papers/v48/lerer16.html.
Li, Yujia, Tarlow, Daniel, Brockschmidt, Marc, and Zemel, Richard S. Gated graph sequence neural networks. CoRR, abs/1511.05493, 2015. URL http://arxiv.org/abs/1511.05493.
Mathieu, Michaël, Couprie, Camille, and LeCun, Yann. Deep multi-scale video prediction beyond mean square error. CoRR, abs/1511.05440, 2015. URL http://arxiv.org/abs/1511.05440. | 1612.03969#42 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03928 | 43 | (a) Attention transfer on ImageNet between ResNet-18 and ResNet-34. Solid lines represent top-5 validation error, dashed – top-5 training error. Two attention transfer losses were used on the outputs of the two last groups of residual blocks respectively, no KD losses used.
(b) Activation attention transfer on CIFAR-10 from WRN-16-2 to WRN-16-1. Test error is in bold, train error is in dashed lines. Attention transfer greatly speeds up convergence and improves final accuracy.
Figure 7
Model       top-1, top-5 error (%)
ResNet-18   30.4, 10.8
AT          29.3, 10.0
ResNet-34   26.1, 8.3
Table 5: Attention transfer validation error (single crop) on ImageNet. Transfer losses are added on epoch 60/100.
IMPLEMENTATION DETAILS | 1612.03928#43 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
1612.03969 | 43 | Miller, Alexander, Fisch, Adam, Dodge, Jesse, Karimi, Amir-Hossein, Bordes, Antoine, and We- arXiv preprint ston, Jason. Key-value memory networks for directly reading documents. arXiv:1606.03126, 2016.
Munkhdalai, Tsendsuren and Yu, Hong. ral networks language comprehension. https://arxiv.org/abs/1610.06454. for Reasoning with memory augmented neu- URL CoRR, abs/1610.06454, 2016.
Sukhbaatar, Sainbayar, Szlam, Arthur, Weston, Jason, and Fergus, Rob. End-to-end memory networks. In Cortes, C., Lawrence, N. D., Lee, D. D., and Sugiyama, M. (eds.), Advances in Neural Information Processing Systems, 2015. URL http://papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf.
Sukhbaatar, Sainbayar, Szlam, Arthur, and Fergus, Rob. Learning multiagent communication with backpropagation. CoRR, abs/1605.07736, 2016. URL http://arxiv.org/abs/1605.07736. | 1612.03969#43 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 44 | k co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p. full, nodict full 8 full 4 full 2 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46M 92.0 9.5M 59.8 22M 94.5 24M 98.4 16M 96.3 12M 72.1 26M 63.7 14M 95.6 13M 92.1 5.3M 59.8 13M 94.5 13M 98.4 8.8M 96.3 6.6M 72 15M 63.6 7.5M 95.6 7.2M 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3M 8 8 8 8 200K 92.0 2.5M 59.7 2.5M 94.3 2.5M 98.5 2.5M 96.6 2.5M 71.8 2.5M 63.3 2.5M 95.6 2.5M 100K 91.9 1.3M | 1612.03651#44 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03928 | 44 | Table 5: Attention transfer validation error (single crop) on ImageNet. Transfer losses are added on epoch 60/100.
IMPLEMENTATION DETAILS
The experiments were conducted in the Torch machine learning framework. Double propagation can be implemented in a modern framework with automatic differentiation support, e.g. Torch, Theano, TensorFlow. For ImageNet experiments we used the fb.resnet.torch code, and used 2 Titan X cards with data parallelism in both teacher and student to speed up training. Code and models for our experiments are available at https://github.com/szagoruyko/attention-transfer.
13 | 1612.03928#44 | Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer | Attention plays a critical role in human visual experience. Furthermore, it
has recently been demonstrated that attention can also play an important role
in the context of applying artificial neural networks to a variety of tasks
from fields such as computer vision and NLP. In this work we show that, by
properly defining attention for convolutional neural networks, we can actually
use this type of information in order to significantly improve the performance
of a student CNN network by forcing it to mimic the attention maps of a
powerful teacher network. To that end, we propose several novel methods of
transferring attention, showing consistent improvement across a variety of
datasets and convolutional neural network architectures. Code and models for
our experiments are available at
https://github.com/szagoruyko/attention-transfer | http://arxiv.org/pdf/1612.03928 | Sergey Zagoruyko, Nikos Komodakis | cs.CV | null | null | cs.CV | 20161212 | 20170212 | [] |
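The implementation note above says that double propagation only needs a framework with automatic differentiation. The toy example below illustrates the mechanism in PyTorch (a substitution for the Lua Torch used in the text): a loss defined on input gradients is differentiated again by keeping the first backward pass in the autograd graph with create_graph=True. The tiny model and the squared-gradient loss are placeholders, not the paper's gradient-based attention transfer objective.

```python
# "Double propagation" in miniature: differentiate a loss that is itself a
# function of gradients.
import torch

model = torch.nn.Sequential(torch.nn.Linear(10, 10), torch.nn.ReLU(),
                            torch.nn.Linear(10, 1))
x = torch.randn(4, 10, requires_grad=True)
out = model(x).sum()

# First propagation: gradient of the output w.r.t. the input, kept in the graph.
(g,) = torch.autograd.grad(out, x, create_graph=True)

# A loss defined on that gradient (e.g. matching another network's input gradient).
loss = g.pow(2).mean()

# Second propagation: gradients of the gradient-based loss w.r.t. the weights.
loss.backward()
print(model[0].weight.grad.shape)  # torch.Size([10, 10])
```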
1612.03969 | 44 | Trischler, Adam, Ye, Zheng, Yuan, Xingdi, and Suleman, Kaheer. Natural language comprehension with the EpiReader. CoRR, abs/1606.02270, 2016. URL http://arxiv.org/abs/1606.02270.
Weston, Jason. Dialog-based language learning. CoRR, abs/1604.06045, 2016. URL http://arxiv.org/abs/1604.06045.
Weston, Jason, Chopra, Sumit, and Bordes, Antoine. Memory networks. CoRR, abs/1410.3916, 2014. URL http://arxiv.org/abs/1410.3916.
Weston, Jason, Bordes, Antoine, Chopra, Sumit, and Mikolov, Tomas. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698, 2015. URL http://arxiv.org/abs/1502.05698.
Xiong, Caiming, Merity, Stephen, and Socher, Richard. Dynamic memory networks for visual and textual question answering. In ICML, 2016.
# A TRAINING DETAILS | 1612.03969#44 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 45 | 96.6 2.5M 71.8 2.5M 63.3 2.5M 95.6 2.5M 100K 91.9 1.3M 59.5 1.3M 94.3 1.3M 98.5 1.3M 96.6 1.3M 71.6 1.3M 63.4 1.3M 95.6 1.3M 50K 91.7 645K 59.7 645K 94.3 644K 98.5 645K 96.6 645K 71.5 645K 63.2 645K 95.6 644K 10K 91.3 137K 58.6 137K 93.2 137K 98.5 137K 96.5 137K 71.3 137K 63.3 137K 95.4 137K 4 4 4 4 200K 92.0 1.8M 59.7 1.8M 94.3 1.8M 98.5 1.8M 96.6 1.8M 71.7 1.8M 63.3 1.8M 95.6 1.8M 100K 91.9 889K 59.5 889K 94.4 889K 98.5 889K 96.6 889K 71.7 889K 63.4 889K 95.6 889K 50K 91.7 449K 59.6 449K 94.3 449K 98.5 450K 96.6 | 1612.03651#45 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03969 | 45 | # A TRAINING DETAILS
All models were implemented using Torch (Collobert et al., 2011). In all experiments, we initialized our model by drawing weights from a Gaussian distribution with mean zero and standard deviation 0.1, except for the PReLU slopes and encoder weights which were initialized to 1. Note that the PReLU initialization is related to two of the heuristics used in (Sukhbaatar et al., 2015), namely starting training with a purely linear model, and adding non-linearities to half of the hidden units. Our initialization allows the model to choose when and how much to enter the non-linear regime. Initializing the encoder weights to 1 corresponds to beginning with a BoW encoding, which the model can then choose to modify. The initial values of the memory slots were initialized to the key values, which we found to help performance. Optimization was done with SGD or ADAM using minibatches of size 32, and gradients with norm greater than 40 were clipped to 40. A null symbol whose embedding was constrained to be zero was used to pad all sentences or windows to a fixed size.
# B DETAILS OF WORLD MODEL EXPERIMENTS | 1612.03969#45 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
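The training details above (Gaussian initialization with standard deviation 0.1, PReLU slopes and encoder weights started at 1, gradient norms clipped to 40) can be mirrored in a few lines. The sketch below uses PyTorch rather than the Lua Torch mentioned in the text, and the toy encoder module is an assumption used only to show where each choice plugs in.

```python
# Initialization and gradient clipping in the spirit of the description above.
import torch

class ToyEncoder(torch.nn.Module):
    def __init__(self, dim=100, sent_len=10):
        super().__init__()
        self.emb = torch.nn.Linear(dim, dim, bias=False)
        # PReLU slopes initialized to 1 => the unit starts out purely linear.
        self.prelu = torch.nn.PReLU(num_parameters=dim, init=1.0)
        # Positional multiplicative mask initialized to 1 => plain BoW at the start.
        self.mask = torch.nn.Parameter(torch.ones(sent_len, dim))
        torch.nn.init.normal_(self.emb.weight, mean=0.0, std=0.1)

    def forward(self, sent_embs):            # sent_embs: (sent_len, dim)
        return self.prelu(self.emb((self.mask * sent_embs).sum(0)))

enc = ToyEncoder()
opt = torch.optim.Adam(enc.parameters(), lr=0.01)
loss = enc(torch.randn(10, 100)).pow(2).mean()
loss.backward()
# Gradients with norm greater than 40 are rescaled to norm 40.
torch.nn.utils.clip_grad_norm_(enc.parameters(), max_norm=40.0)
opt.step()
```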
1612.03651 | 46 | 71.7 889K 63.4 889K 95.6 889K 50K 91.7 449K 59.6 449K 94.3 449K 98.5 450K 96.6 449K 71.4 450K 63.2 449K 95.5 449K 98K 98K 10K 91.5 98K 58.6 98K 93.2 98K 98.5 96.5 98K 71.2 98K 63.3 98K 95.4 2 2 2 2 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4M 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693K 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352K 78K 79K 10K 91.3 78K 58.5 78K 93.2 78K 98.4 96.5 78K 70.8 78K 63.2 78K 95.3 | 1612.03651#46 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03969 | 46 | # B DETAILS OF WORLD MODEL EXPERIMENTS
Two agents are initially placed at random on a 10 × 10 grid with 100 distinct locations {(1, 1), (1, 2), ...(9, 10), (10, 10)}. At each time step an agent is chosen at random. There are two types of actions: the agent can face a given direction, or can move a number of steps ahead. Actions are sampled until a legal action is found by either choosing to change direction or move with equal probability. If they change direction, the direction is chosen between north, south, east and west with equal probability. If they move, the number of steps is randomly chosen between 1 and 5. A legal action is one which does not place the agent off the grid. Stories are given to the network in textual form, an example of which is below. The first action after each agent is placed on the grid is to face a given direction. Therefore, the maximum number of actions made by one agent is T − 2. The network learns word embeddings for all words in the vocabulary such as locations, agent identifiers and actions. At question time, the model must predict the correct answer (which will always be a location) from all the tokens in the vocabulary. | 1612.03969#46 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
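A small generator in the spirit of the grid-world description above is sketched below: two agents on a 10x10 grid, a random agent per step, and face/move actions that are only kept when they stay on the grid. The exact tokenization and the resampling procedure are assumptions; the output merely mimics the example story format.

```python
# Toy story generator for the two-agent grid world described above.
import random

DIRS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def generate_story(T=10, seed=0):
    rng = random.Random(seed)
    pos = {a: (rng.randint(1, 10), rng.randint(1, 10)) for a in ("agent1", "agent2")}
    facing = {}
    story = []
    for a in pos:                                # place agents, then face a direction
        story.append(f"{a} is at {pos[a]}")
        facing[a] = rng.choice(list(DIRS))
        story.append(f"{a} faces-{facing[a]}")
    while len(story) < T:
        a = rng.choice(list(pos))
        if rng.random() < 0.5:                   # change direction
            facing[a] = rng.choice(list(DIRS))
            story.append(f"{a} faces-{facing[a]}")
        else:                                    # try to move 1-5 steps
            steps = rng.randint(1, 5)
            dx, dy = DIRS[facing[a]]
            nx, ny = pos[a][0] + dx * steps, pos[a][1] + dy * steps
            if 1 <= nx <= 10 and 1 <= ny <= 10:  # keep only legal (on-grid) moves
                pos[a] = (nx, ny)
                story.append(f"{a} moves-{steps}")
    story += [f"Q: where is {a} ?  A: {pos[a]}" for a in pos]
    return story

print("\n".join(generate_story()))
```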
1612.03651 | 47 | Table 6: Comparison with different quantization methods and levels of pruning. "co" is the cut-off parameter of the pruning.
Dataset Zhang et al. (2015) Xiao & Cho (2016) AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p. 90.2 59.5 94.5 98.3 95.1 70.5 61.6 94.8 108M 10.8M 10.8M 108M 108M 108M 108M 108M 91.4 59.2 94.1 98.6 95.2 71.4 61.8 94.5 80M 1.6M 1.6M 1.2M 1.6M 80M 1.4M 1.2M 91.9 59.6 94.3 98.5 96.5 71.7 63.3 95.5 889K 449K 449K 98K 98K 889K 98K 449K | 1612.03651#47 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
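The quantization results above revolve around product quantization of the embedding matrix, optionally combined with a vocabulary cut-off ("co"). The sketch below shows plain product quantization in NumPy: split each embedding into k sub-vectors, learn a small codebook per block, and store one code per block. The normalization trick of NPQ, the retraining step and the codebook sizes are simplifications, not the fastText.zip implementation.

```python
# Plain product quantization of an embedding matrix, sketched with a tiny
# k-means routine.
import numpy as np

def kmeans(x, n_centroids, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    c = x[rng.choice(len(x), n_centroids, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((x[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
        for j in range(n_centroids):
            if np.any(assign == j):
                c[j] = x[assign == j].mean(0)
    return c, assign

def product_quantize(emb, k=4, n_centroids=256):
    """emb: (V, d) with d divisible by k -> per-block codebooks and byte codes."""
    V, d = emb.shape
    blocks = emb.reshape(V, k, d // k)
    codebooks, codes = [], np.empty((V, k), dtype=np.uint8)
    for b in range(k):
        c, assign = kmeans(blocks[:, b, :], n_centroids)
        codebooks.append(c)
        codes[:, b] = assign
    return codebooks, codes

def decode(codebooks, codes):
    return np.concatenate([codebooks[b][codes[:, b]]
                           for b in range(len(codebooks))], axis=1)

emb = np.random.default_rng(0).normal(size=(1000, 16)).astype(np.float32)
cb, codes = product_quantize(emb, k=4, n_centroids=16)
print(np.mean((decode(cb, codes) - emb) ** 2))  # reconstruction error
```

Storage drops from d floats per word to k bytes plus the small shared codebooks, which is where the memory savings in the tables come from.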
1612.03969 | 47 | agent1 is at (2,8)
agent1 faces-N
agent2 is at (9,7)
agent2 faces-N
agent2 moves-2
agent2 faces-E
agent2 moves-1
agent1 moves-1
agent2 faces-S
agent2 moves-5
Q1: where is agent1 ?   A1: (2,9)
Q2: where is agent2 ?   A2: (10,4)
# C ADDITIONAL RESULTS ON BABI TASKS
We provide some additional experiments on the bAbI tasks, in order to better understand the influence of architecture, weight tying, and amount of training data. Table 5 shows results when a simple BoW encoding is used for the inputs. Here, the EntNet still performs better than a MemN2N which uses the same encoding scheme, indicating that the architecture has an important effect. Tying the key vectors to entities did not help, and hurt performance for some tasks. Table 6 shows results when using only 1k training samples. In this setting, the EntNet performs worse than the MemN2N. | 1612.03969#47 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03969 | 48 | Table 7 shows results for the EntNet and the DNC when models are trained on all tasks jointly. We report results for the mean performance across different random seeds (20 for the DNC, 5 for the EntNet), as well as the performance for the single best seed (measured by validation error). The DNC results for mean performance were taken from the appendix of Graves et al. (2016). The DNC has better performance in terms of the best seed, but also exhibits high variation across seeds, indicating that many different runs are required to achieve good performance. The EntNet exhibits less variation across runs and is able to solve more tasks consistently. Note that Table 2 reports DNC results with joint training, since results when training on each task separately were not available.
Table 5: Error rates on bAbI tasks when inputs are encoded using BoW. "Tied" refers to the case where key vectors are tied with entity embeddings. | 1612.03969#48 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 49 | full,nodict NPQ NPQ NPQ NPQ NPQ NPQ NPQ NPQ x x x x co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p. 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46M 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4M 200K 92.2 830K 59.3 830K 94.1 830K 98.4 830K 96.5 830K 70.7 830K 63.0 830K 95.5 830K 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693K 100K 91.8 420K 59.1 420K 93.9 420K 98.4 420K 96.5 420K 70.6 420K 62.8 420K 95.3 420K 50K 91.6 352K 59.6 352K 94.3 352K 98.4 | 1612.03651#49 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
1612.03969 | 49 | Task MemN2N EntNet-tied EntNet 1: 1 supporting fact 2: 2 supporting facts 3: 3 supporting facts 4: 2 argument relations 5: 3 argument relations 6: yes/no questions 7: counting 8: lists/sets 9: simple negation 10: indeï¬nite knowledge 11: basic coreference 12: conjunction 13: compound coreference 14: time reasoning 15: basic deduction 16: basic induction 17: positional reasoning 18: size reasoning 19: path ï¬nding 20: agentâs motivation 0 0.6 7 32.6 10.2 0.2 10.6 2.6 0.3 0.5 0 0 0 0.1 11.4 52.9 39.3 40.5 74.4 0 0 3.0 9.6 33.8 1.7 0 0.5 0.1 0 0 0.3 0 0.2 6.2 12.5 46.5 40.5 44.2 75.1 0 0 1.2 9.0 31.8 3.5 0 0.5 0.3 0 0 0 0 0.4 0.1 12.1 0 40.5 45.7 74.0 0 Failed Tasks (> 5%): Mean Error: 9 15.6 8 13.7 6 10.9 | 1612.03969#49 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03969 | 50 | Table 6: Results on bAbI Tasks with 1k samples.
Task 1: 1 supporting fact 2: 2 supporting facts 3: 3 supporting facts 4: 2 argument relations 5: 3 argument relations 6: yes/no questions 7: counting 8: lists/sets 9: simple negation 10: indeï¬nite knowledge 11: basic coreference 12: conjunction 13: compound coreference 14: time reasoning 15: basic deduction 16: basic induction 17: positional reasoning 18: size reasoning 19: path ï¬nding 20: agentâs motivation 0 8.3 40.3 2.8 13.1 7.6 17.3 10.0 13.2 15.1 0.9 0.2 0.4 1.7 0 1.3 51.0 11.1 82.8 0 0.7 56.4 69.7 1.4 4.6 30.0 22.3 19.2 31.5 15.6 8.0 0.8 9.0 62.9 57.8 53.2 46.4 8.8 90.4 2.6 Failed Tasks (> 5%): Mean Error: 11 13.9 15 29.6
# MemN2N EntNet | 1612.03969#50 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03651 | 51 | Table 8: Comparison with and without Bloom filters. For NPQ, we set d = 8 and k = 2.
Model k norm retrain Acc. Size full 45.4 12G 128 Input 128 Input 128 Input 128 Input+Output 128 Input+Output 128 Input+Output, co=2M Input+Output, n co=1M 128 x x x x x x x x x x 45.0 45.3 45.5 45.2 45.4 45.5 43.9 1.7G 1.8G 1.8G 1.5G 1.5G 305M 179M Input Input Input Input+Output Input+Output Input+Output, co=2M Input+Output, co=1M Input+Output, co=2M Input+Output, co=1M 64 64 64 64 64 64 64 64 64 x x x x x x x x x x x 44.0 44.7 44.9 44.6 44.8 42.5 39.9 45.0 43.4 1.1G 1.1G 1.1G 784M 784M 183M 118M 183M 118M x x
Table 9: FlickrTag: Comparison for a large dataset of (i) different quantization methods and parameters, (ii) with or without re-training.
13 | 1612.03651#51 | FastText.zip: Compressing text classification models | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy. | http://arxiv.org/pdf/1612.03651 | Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, Tomas Mikolov | cs.CL, cs.LG | Submitted to ICLR 2017 | null | cs.CL | 20161212 | 20161212 | [
{
"id": "1510.03009"
},
{
"id": "1607.01759"
},
{
"id": "1602.02830"
},
{
"id": "1602.00367"
},
{
"id": "1512.04906"
},
{
"id": "1609.04309"
},
{
"id": "1609.06753"
}
] |
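Tables 8 and 9 above vary a vocabulary cut-off ("co") and a norm-based ranking with optional re-training. A minimal version of that pruning step is sketched below: rank embeddings by L2 norm and keep only the top co entries. Whether the released tool ranks by norm, entropy or coverage in a given configuration, and how it re-trains afterwards, is not shown here.

```python
# Norm-based vocabulary pruning with a cut-off, as a standalone sketch.
import numpy as np

def prune_by_norm(emb, vocab, cutoff):
    """emb: (V, d), vocab: list of V tokens -> pruned (cutoff, d) matrix + vocab."""
    keep = np.argsort(-np.linalg.norm(emb, axis=1))[:cutoff]
    return emb[keep], [vocab[i] for i in keep]

rng = np.random.default_rng(0)
emb = rng.normal(size=(10000, 16))
vocab = [f"tok{i}" for i in range(len(emb))]  # hypothetical token names
small_emb, small_vocab = prune_by_norm(emb, vocab, cutoff=1000)
print(small_emb.shape, len(small_vocab))  # (1000, 16) 1000
```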
1612.03969 | 52 | All Seeds Best Seed DNC EntNet 0 0.4 1.8 0 0.8 0 0.6 0.3 0.2 0.2 0 0 0 0.4 0 55.1 12.0 0.8 3.9 0 2 3.8 Task 1: 1 supporting fact 2: 2 supporting facts 3: 3 supporting facts 4: 2 argument relations 5: 3 argument relations 6: yes/no questions 7: counting 8: lists/sets 9: simple negation 10: indeï¬nite knowledge 11: basic coreference 12: conjunction 13: compound coreference 14: time reasoning 15: basic deduction 16: basic induction 17: positional reasoning 18: size reasoning 19: path ï¬nding 20: agentâs motivation Failed Tasks (> 5%): Mean Error: DNC 9.0 ± 12.6 39.2 ± 20.5 39.6 ± 16.4 0.4 ± 0.7 1.5 ± 1.0 6.9 ± 7.5 9.8 ± 7.0 5.5 ± 5.9 7.7 ± 8.3 9.6 ± 11.4 3.3 ± 5.7 5.0 ± 6.3 3.1 ± 3.6 11.0 ± 7.5 27.2 ± 20.1 53.6 | 1612.03969#52 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03969 | 53 | 3.3 ± 5.7 5.0 ± 6.3 3.1 ± 3.6 11.0 ± 7.5 27.2 ± 20.1 53.6 ± 1.9 32.4 ± 8.0 4.2 ± 1.8 64.6 ± 37.4 0.0 ± 0.1 11.2 ± 5.4 16.7 ± 7.6 EntNet 0 ± 0.1 15.3 ± 15.7 29.3 ± 26.3 0.1 ± 0.1 0.4 ± 0.3 0.6 ± 0.8 1.8 ± 1.1 1.5 ± 1.2 0 ± 0.1 0.1 ± 0.2 0.2 ± 0.2 0 ± 0 0 ± 0.1 7.3 ± 4.5 3.6 ± 8.1 53.3 ± 1.2 8.8 ± 3.8 1.3 ± 0.9 70.4 ± 6.1 0 ± 0 5 ± 1.2 9.7 ± 2.6 0.1 2.8 10.6 0 0.4 0.3 0.8 0.1 0 0 0 0 0 3.6 0 52.1 11.7 2.1 63.0 0 4 7.38 | 1612.03969#53 | Tracking the World State with Recurrent Entity Networks | We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large scale datasets such as
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | http://arxiv.org/pdf/1612.03969 | Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, Yann LeCun | cs.CL | null | ICLR 2017 | cs.CL | 20161212 | 20170510 | [
{
"id": "1503.01007"
},
{
"id": "1606.03126"
},
{
"id": "1605.07427"
}
] |
1612.03144 | 1 | 1Facebook AI Research (FAIR) 2Cornell University and Cornell Tech
# Abstract
Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 6 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.
(a) Featurized image pyramid (b) Single feature map (c) Pyramidal feature hierarchy (d) Feature Pyramid Network | 1612.03144#1 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 2 | Figure 1. (a) Using an image pyramid to build a feature pyramid. Features are computed on each of the image scales independently, which is slow. (b) Recent detection systems have opted to use only single scale features for faster detection. (c) An alternative is to reuse the pyramidal feature hierarchy computed by a ConvNet as if it were a featurized image pyramid. (d) Our proposed Feature Pyramid Network (FPN) is fast like (b) and (c), but more accurate. In this figure, feature maps are indicated by blue outlines and thicker outlines denote semantically stronger features.
# 1. Introduction | 1612.03144#2 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 3 | # 1. Introduction
Recognizing objects at vastly different scales is a fundamental challenge in computer vision. Feature pyramids built upon image pyramids (for short we call these featurized image pyramids) form the basis of a standard solution [1] (Fig. 1(a)). These pyramids are scale-invariant in the sense that an object's scale change is offset by shifting its level in the pyramid. Intuitively, this property enables a model to detect objects across a large range of scales by scanning the model over both positions and pyramid levels. Featurized image pyramids were heavily used in the era of hand-engineered features [5, 25]. They were so critical that object detectors like DPM [7] required dense scale sampling to achieve good results (e.g., 10 scales per octave). For recognition tasks, engineered features have | 1612.03144#3 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
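As a concrete illustration of the featurized image pyramid discussed above, the sketch below runs one (toy) feature extractor independently on several rescaled copies of an image. The scales and the extractor are arbitrary placeholders; the point is only that the full feature computation is repeated once per scale, which is what makes this approach accurate but slow.

```python
# A "featurized image pyramid": same extractor, several image scales.
import torch
import torch.nn.functional as F

extractor = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU())

image = torch.randn(1, 3, 224, 224)
scales = (0.5, 1.0, 2.0)                      # e.g. one octave down and up
pyramid = [extractor(F.interpolate(image, scale_factor=s, mode="bilinear",
                                   align_corners=False))
           for s in scales]
print([p.shape for p in pyramid])             # one feature map per image scale
```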
1612.03144 | 4 | largely been replaced with features computed by deep convolutional networks (ConvNets) [19, 20]. Aside from being capable of representing higher-level semantics, ConvNets are also more robust to variance in scale and thus facilitate recognition from features computed on a single input scale [15, 11, 29] (Fig. 1(b)). But even with this robustness, pyramids are still needed to get the most accurate results. All recent top entries in the ImageNet [33] and COCO [21] detection challenges use multi-scale testing on featurized image pyramids (e.g., [16, 35]). The principal advantage of featurizing each level of an image pyramid is that it produces a multi-scale feature representation in which all levels are semantically strong, including the high-resolution levels.
Nevertheless, featurizing each level of an image pyramid has obvious limitations. Inference time increases considerably (e.g., by four times [11]), making this approach impractical for real applications. Moreover, training deep
1 | 1612.03144#4 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 5 | 1
networks end-to-end on an image pyramid is infeasible in terms of memory, and so, if exploited, image pyramids are used only at test time [15, 11, 16, 35], which creates an inconsistency between train/test-time inference. For these reasons, Fast and Faster R-CNN [11, 29] opt to not use featurized image pyramids under default settings.
However, image pyramids are not the only way to compute a multi-scale feature representation. A deep ConvNet computes a feature hierarchy layer by layer, and with subsampling layers the feature hierarchy has an inherent multi-scale, pyramidal shape. This in-network feature hierarchy produces feature maps of different spatial resolutions, but introduces large semantic gaps caused by different depths. The high-resolution maps have low-level features that harm their representational capacity for object recognition. | 1612.03144#5 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
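The in-network feature hierarchy described above falls out of any backbone with strided stages: a single forward pass already yields feature maps at several spatial resolutions. The toy backbone below is a stand-in for a real ConvNet (such as a ResNet or VGG) and only demonstrates the shape of that hierarchy.

```python
# A single forward pass yields a pyramid of feature maps "for free".
import torch

class TinyBackbone(torch.nn.Module):
    def __init__(self, widths=(32, 64, 128, 256)):
        super().__init__()
        chans = (3,) + widths
        self.stages = torch.nn.ModuleList(
            torch.nn.Sequential(
                torch.nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
                torch.nn.ReLU())
            for i in range(len(widths)))

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)          # one map per stage, halving resolution
        return feats

maps = TinyBackbone()(torch.randn(1, 3, 128, 128))
print([m.shape for m in maps])       # increasing channels, decreasing resolution
```

The catch, as the text notes, is that the high-resolution maps in this hierarchy are semantically weak, which is what the top-down pathway is meant to fix.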
1612.03144 | 6 | The Single Shot Detector (SSD) [22] is one of the first attempts at using a ConvNet's pyramidal feature hierarchy as if it were a featurized image pyramid (Fig. 1(c)). Ideally, the SSD-style pyramid would reuse the multi-scale feature maps from different layers computed in the forward pass and thus come free of cost. But to avoid using low-level features SSD foregoes reusing already computed layers and instead builds the pyramid starting from high up in the network (e.g., conv4_3 of VGG nets [36]) and then by adding several new layers. Thus it misses the opportunity to reuse the higher-resolution maps of the feature hierarchy. We show that these are important for detecting small objects. | 1612.03144#6 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 7 | The goal of this paper is to naturally leverage the pyramidal shape of a ConvNet's feature hierarchy while creating a feature pyramid that has strong semantics at all scales. To achieve this goal, we rely on an architecture that combines low-resolution, semantically strong features with high-resolution, semantically weak features via a top-down pathway and lateral connections (Fig. 1(d)). The result is a feature pyramid that has rich semantics at all levels and is built quickly from a single input image scale. In other words, we show how to create in-network feature pyramids that can be used to replace featurized image pyramids without sacrificing representational power, speed, or memory.
Similar architectures adopting top-down and skip connections are popular in recent research [28, 17, 8, 26]. Their goals are to produce a single high-level feature map of a fine resolution on which the predictions are to be made (Fig. 2 top). On the contrary, our method leverages the architecture as a feature pyramid where predictions (e.g., object detections) are independently made on each level (Fig. 2 bottom). Our model echoes a featurized image pyramid, which has not been explored in these works. | 1612.03144#7 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
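The top-down pathway with lateral connections described above can be sketched directly. The module below reduces each backbone stage with a 1x1 convolution, upsamples the coarser map, adds it to the finer lateral, and smooths the sum with a 3x3 convolution. The 256-channel width and nearest-neighbor upsampling are common choices assumed here for illustration, not necessarily the paper's exact configuration.

```python
# Top-down pathway with lateral connections over backbone stages C2..C5.
import torch
import torch.nn.functional as F

class TopDownFPN(torch.nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = torch.nn.ModuleList(
            torch.nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels)
        self.smooth = torch.nn.ModuleList(
            torch.nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels)

    def forward(self, feats):                # feats ordered fine -> coarse
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        # Top-down: start from the coarsest map, upsample and add laterals.
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        # One smoothing conv per level gives the pyramid P2..P5.
        return [s(l) for s, l in zip(self.smooth, laterals)]

# Fake backbone outputs at strides 4, 8, 16, 32 for a 128x128 image.
feats = [torch.randn(1, c, 128 // s, 128 // s)
         for c, s in zip((256, 512, 1024, 2048), (4, 8, 16, 32))]
pyramid = TopDownFPN()(feats)
print([p.shape for p in pyramid])
```

Every output level keeps the spatial resolution of its lateral input while inheriting the semantics of the coarser levels, which is the property the text contrasts with single-map top-down designs.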
1612.03144 | 8 | We evaluate our method, called a Feature Pyramid Network (FPN), in various systems for detection and segmentation [11, 29, 27]. Without bells and whistles, we report a state-of-the-art single-model result on the challenging COCO detection benchmark [21] simply based on FPN and
Figure 2. Top: a top-down architecture with skip connections, where predictions are made on the finest level (e.g., [28]). Bottom: our model that has a similar structure but leverages it as a feature pyramid, with predictions made independently at all levels. | 1612.03144#8 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 9 | a basic Faster R-CNN detector [29], surpassing all existing heavily-engineered single-model entries of competition winners. In ablation experiments, we find that for bounding box proposals, FPN significantly increases the Average Recall (AR) by 8.0 points; for object detection, it improves the COCO-style Average Precision (AP) by 2.3 points and PASCAL-style AP by 3.8 points, over a strong single-scale baseline of Faster R-CNN on ResNets [16]. Our method is also easily extended to mask proposals and improves both instance segmentation AR and speed over state-of-the-art methods that heavily depend on image pyramids.
In addition, our pyramid structure can be trained end-to-end with all scales and is used consistently at train/test time, which would be memory-infeasible using image pyramids. As a result, FPNs are able to achieve higher accuracy than all existing state-of-the-art methods. Moreover, this improvement is achieved without increasing testing time over the single-scale baseline. We believe these advances will facilitate future research and applications. Our code will be made publicly available.
# 2. Related Work | 1612.03144#9 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 10 | # 2. Related Work
Hand-engineered features and early neural networks. SIFT features [25] were originally extracted at scale-space extrema and used for feature point matching. HOG features [5], and later SIFT features as well, were computed densely over entire image pyramids. These HOG and SIFT pyramids have been used in numerous works for image classification, object detection, human pose estimation, and more. There has also been significant interest in computing featurized image pyramids quickly. Dollár et al. [6] demonstrated fast pyramid computation by first computing a sparsely sampled (in scale) pyramid and then interpolating missing levels. Before HOG and SIFT, early work on face detection with ConvNets [38, 32] computed shallow networks over image pyramids to detect faces across scales. | 1612.03144#10 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 11 | Deep ConvNet object detectors. With the development of modern deep ConvNets [19], object detectors like OverFeat [34] and R-CNN [12] showed dramatic improvements in accuracy. OverFeat adopted a strategy similar to early neural network face detectors by applying a ConvNet as a sliding window detector on an image pyramid. R-CNN adopted a region proposal-based strategy [37] in which each proposal was scale-normalized before classifying with a ConvNet. SPPnet [15] demonstrated that such region-based detectors could be applied much more efficiently on feature maps extracted on a single image scale. Recent and more accurate detection methods like Fast R-CNN [11] and Faster R-CNN [29] advocate using features computed from a single scale, because it offers a good trade-off between accuracy and speed. Multi-scale detection, however, still performs better, especially for small objects. | 1612.03144#11 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 12 | Methods using multiple layers. A number of recent approaches improve detection and segmentation by using different layers in a ConvNet. FCN [24] sums partial scores for each category over multiple scales to compute semantic segmentations. Hypercolumns [13] uses a similar method for object instance segmentation. Several other approaches (HyperNet [18], ParseNet [23], and ION [2]) concatenate features of multiple layers before computing predictions, which is equivalent to summing transformed features. SSD [22] and MS-CNN [3] predict objects at multiple layers of the feature hierarchy without combining features or scores. There are recent methods exploiting lateral/skip connections that associate low-level feature maps across resolutions and semantic levels, including U-Net [31] and SharpMask [28] for segmentation, Recombinator networks [17] for face detection, and Stacked Hourglass networks [26] for keypoint estimation. Ghiasi et al. [8] present a Laplacian pyramid presentation for FCNs to progressively refine segmentation. Although these methods adopt architectures with pyramidal shapes, they are unlike | 1612.03144#12 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 14 | # 3. Feature Pyramid Networks
Our goal is to leverage a ConvNet's pyramidal feature hierarchy, which has semantics from low to high levels, and build a feature pyramid with high-level semantics throughout. The resulting Feature Pyramid Network is general-purpose and in this paper we focus on sliding window proposers (Region Proposal Network, RPN for short) [29] and region-based detectors (Fast R-CNN) [11]. We also generalize FPNs to instance segmentation proposals in Sec. 6.
Our method takes a single-scale image of an arbitrary size as input, and outputs proportionally sized feature maps
Figure 3. A building block illustrating the lateral connection and the top-down pathway, merged by addition.
at multiple levels, in a fully convolutional fashion. This process is independent of the backbone convolutional architectures (e.g., [19, 36, 16]), and in this paper we present results using ResNets [16]. The construction of our pyramid involves a bottom-up pathway, a top-down pathway, and lateral connections, as introduced in the following. | 1612.03144#14 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 15 | Bottom-up pathway. The bottom-up pathway is the feed-forward computation of the backbone ConvNet, which computes a feature hierarchy consisting of feature maps at several scales with a scaling step of 2. There are often many layers producing output maps of the same size and we say these layers are in the same network stage. For our feature pyramid, we define one pyramid level for each stage. We choose the output of the last layer of each stage as our reference set of feature maps, which we will enrich to create our pyramid. This choice is natural since the deepest layer of each stage should have the strongest features.
Specifically, for ResNets [16] we use the feature activations output by each stage's last residual block. We denote the output of these last residual blocks as {C2, C3, C4, C5} for conv2, conv3, conv4, and conv5 outputs, and note that they have strides of {4, 8, 16, 32} pixels with respect to the input image. We do not include conv1 into the pyramid due to its large memory footprint. | 1612.03144#15 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
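To make the bottom-up pathway just described concrete, here is a minimal PyTorch-style sketch (not the authors' code) that runs a torchvision ResNet-50 stage by stage and collects {C2, C3, C4, C5}; the attribute names (conv1, layer1–layer4) and the channel counts are assumptions about torchvision's ResNet implementation rather than details taken from the paper.

```python
import torch
from torchvision.models import resnet50

def bottom_up_features(backbone, x):
    # Stem: after conv1 + maxpool the feature map has stride 4.
    x = backbone.conv1(x)
    x = backbone.bn1(x)
    x = backbone.relu(x)
    x = backbone.maxpool(x)
    c2 = backbone.layer1(x)   # stride 4,  256 channels (last block of conv2 stage)
    c3 = backbone.layer2(c2)  # stride 8,  512 channels
    c4 = backbone.layer3(c3)  # stride 16, 1024 channels
    c5 = backbone.layer4(c4)  # stride 32, 2048 channels
    return c2, c3, c4, c5

backbone = resnet50()  # randomly initialized; ImageNet pre-training is assumed in the paper
feats = bottom_up_features(backbone, torch.randn(1, 3, 800, 800))
print([f.shape for f in feats])  # spatial sizes 200, 100, 50, 25 for an 800x800 input
```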
1612.03144 | 16 | Top-down pathway and lateral connections. The top-down pathway hallucinates higher resolution features by upsampling spatially coarser, but semantically stronger, feature maps from higher pyramid levels. These features are then enhanced with features from the bottom-up pathway via lateral connections. Each lateral connection merges feature maps of the same spatial size from the bottom-up pathway and the top-down pathway. The bottom-up feature map is of lower-level semantics, but its activations are more accurately localized as it was subsampled fewer times. | 1612.03144#16 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 17 | Fig. 3 shows the building block that constructs our top-down feature maps. With a coarser-resolution feature map, we upsample the spatial resolution by a factor of 2 (using nearest neighbor upsampling for simplicity). The upsampled map is then merged with the corresponding bottom-up map (which undergoes a 1×1 convolutional layer to reduce channel dimensions) by element-wise addition. This process is iterated until the finest resolution map is generated. To start the iteration, we simply attach a 1×1 convolutional layer on C5 to produce the coarsest resolution map. Finally, we append a 3×3 convolution on each merged map to generate the final feature map, which is to reduce the aliasing effect of upsampling. This final set of feature maps is called {P2, P3, P4, P5}, corresponding to {C2, C3, C4, C5} that are respectively of the same spatial sizes. | 1612.03144#17 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 18 | Because all levels of the pyramid use shared classifiers/regressors as in a traditional featurized image pyramid, we fix the feature dimension (numbers of channels, denoted as d) in all the feature maps. We set d = 256 in this paper and thus all extra convolutional layers have 256-channel outputs. There are no non-linearities in these extra layers, which we have empirically found to have minor impacts.
Simplicity is central to our design and we have found that our model is robust to many design choices. We have experimented with more sophisticated blocks (e.g., using multi-layer residual blocks [16] as the connections) and observed marginally better results. Designing better connection modules is not the focus of this paper, so we opt for the simple design described above.
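As an illustration of the design described above, the following is a minimal PyTorch-style sketch of the top-down pathway and lateral connections (1×1 lateral convs, nearest-neighbor 2× upsampling, element-wise addition, 3×3 smoothing convs, d = 256, no extra non-linearities). It is a sketch under the assumption of ResNet-50 channel counts, not the released implementation; applying the 3×3 conv to P5 as well is also an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

class FPNTopDown(nn.Module):
    """Top-down pathway + lateral connections producing {P2, P3, P4, P5}."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), d=256):
        super().__init__()
        # 1x1 lateral convs reduce C2..C5 to d channels.
        self.lateral = nn.ModuleList([nn.Conv2d(c, d, kernel_size=1) for c in in_channels])
        # 3x3 convs reduce the aliasing effect of upsampling; no non-linearities.
        self.smooth = nn.ModuleList([nn.Conv2d(d, d, kernel_size=3, padding=1) for _ in in_channels])

    def forward(self, c2, c3, c4, c5):
        l2, l3, l4, l5 = [lat(c) for lat, c in zip(self.lateral, (c2, c3, c4, c5))]
        p5 = l5  # coarsest map: just the 1x1 conv on C5
        p4 = l4 + F.interpolate(p5, scale_factor=2, mode="nearest")
        p3 = l3 + F.interpolate(p4, scale_factor=2, mode="nearest")
        p2 = l2 + F.interpolate(p3, scale_factor=2, mode="nearest")
        return [s(p) for s, p in zip(self.smooth, (p2, p3, p4, p5))]
```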
# 4. Applications
Our method is a generic solution for building feature pyramids inside deep ConvNets. In the following we adopt our method in RPN [29] for bounding box proposal generation and in Fast R-CNN [11] for object detection. To demonstrate the simplicity and effectiveness of our method, we make minimal modifications to the original systems of [29, 11] when adapting them to our feature pyramid.
# 4.1. Feature Pyramid Networks for RPN | 1612.03144#18 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 19 | # 4.1. Feature Pyramid Networks for RPN
RPN [29] is a sliding-window class-agnostic object detector. In the original RPN design, a small subnetwork is evaluated on dense 3×3 sliding windows, on top of a single-scale convolutional feature map, performing object/non-object binary classification and bounding box regression. This is realized by a 3×3 convolutional layer followed by two sibling 1×1 convolutions for classification and regression, which we refer to as a network head. The object/non-object criterion and bounding box regression target are defined with respect to a set of reference boxes called anchors [29]. The anchors are of multiple pre-defined scales and aspect ratios in order to cover objects of different shapes.
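A rough sketch of such a head in PyTorch is shown below; the single objectness channel per anchor (sigmoid-style scoring) and the ReLU after the 3×3 conv are assumptions, since the text only specifies the conv layout.

```python
import torch.nn as nn

class RPNHead(nn.Module):
    """Shared RPN head: 3x3 conv, then two sibling 1x1 convs for
    object/non-object scores and bounding box regression."""
    def __init__(self, in_channels=256, num_anchors=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.cls_logits = nn.Conv2d(in_channels, num_anchors, kernel_size=1)       # objectness
        self.bbox_deltas = nn.Conv2d(in_channels, num_anchors * 4, kernel_size=1)  # box regression

    def forward(self, pyramid_feats):
        # The same head (shared parameters) slides over every pyramid level.
        outputs = []
        for p in pyramid_feats:
            t = self.relu(self.conv(p))
            outputs.append((self.cls_logits(t), self.bbox_deltas(t)))
        return outputs
```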
We adapt RPN by replacing the single-scale feature map with our FPN. We attach a head of the same design (3×3 conv and two sibling 1×1 convs) to each level on our feature pyramid. Because the head slides densely over all locations in all pyramid levels, it is not necessary to have multi-scale
| 1612.03144#19 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 21 | We assign training labels to the anchors based on their Intersection-over-Union (IoU) ratios with ground-truth bounding boxes as in [29]. Formally, an anchor is assigned a positive label if it has the highest IoU for a given ground-truth box or an IoU over 0.7 with any ground-truth box, and a negative label if it has IoU lower than 0.3 for all ground-truth boxes. Note that scales of ground-truth boxes are not explicitly used to assign them to the levels of the pyramid; instead, ground-truth boxes are associated with anchors, which have been assigned to pyramid levels. As such, we introduce no extra rules in addition to those in [29]. We note that the parameters of the heads are shared across all feature pyramid levels; we have also evaluated the alternative without sharing parameters and observed similar accuracy. The good performance of sharing parameters indicates that all levels of our pyramid share similar semantic levels. This advantage is analogous to that of using a featurized image pyramid, where a common head classifier can be applied to features computed at any image scale.
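The labeling rule can be summarized by a small helper like the hypothetical sketch below; it assumes a precomputed anchor-vs-ground-truth IoU matrix with at least one ground-truth box, and ignores other edge cases.

```python
import torch

def label_anchors(iou, pos_thresh=0.7, neg_thresh=0.3):
    """iou: [num_anchors, num_gt] IoU matrix.
    Returns 1 (positive), 0 (negative), or -1 (ignored) per anchor."""
    labels = torch.full((iou.shape[0],), -1, dtype=torch.long)
    max_iou, _ = iou.max(dim=1)
    labels[max_iou < neg_thresh] = 0          # negative: IoU < 0.3 for all GT boxes
    labels[max_iou >= pos_thresh] = 1         # positive: IoU over 0.7 with any GT box
    best_anchor_per_gt = iou.argmax(dim=0)    # positive: highest IoU for a given GT box
    labels[best_anchor_per_gt] = 1
    return labels
```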
With the above adaptations, RPN can be naturally trained and tested with our FPN, in the same fashion as in [29]. We elaborate on the implementation details in the experiments.
# 4.2. Feature Pyramid Networks for Fast R-CNN | 1612.03144#21 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 22 | # 4.2. Feature Pyramid Networks for Fast R-CNN
Fast R-CNN [11] is a region-based object detector in which Region-of-Interest (RoI) pooling is used to extract features. Fast R-CNN is most commonly performed on a single-scale feature map. To use it with our FPN, we need to assign RoIs of different scales to the pyramid levels.
We view our feature pyramid as if it were produced from an image pyramid. Thus we can adapt the assignment strategy of region-based detectors [15, 11] in the case when they are run on image pyramids. Formally, we assign an RoI of width w and height h (on the input image to the network) to the level Pk of our feature pyramid by:
k = ⌊k0 + log2(√(wh)/224)⌋. (1) | 1612.03144#22 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 23 | k = ⌊k0 + log2(√(wh)/224)⌋. (1)
Here 224 is the canonical ImageNet pre-training size, and k0 is the target level onto which an RoI with w × h = 224² should be mapped. Analogous to the ResNet-based Faster R-CNN system [16] that uses C4 as the single-scale feature map, we set k0 to 4. Intuitively, Eqn. (1) means that if the RoI's scale becomes smaller (say, 1/2 of 224), it should be mapped into a finer-resolution level (say, k = 3).
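Eqn. (1) can be written as a short helper function; clamping k to the available levels P2–P5 is an implementation assumption rather than something stated in the text.

```python
import math

def roi_to_fpn_level(w, h, k0=4, k_min=2, k_max=5, canonical=224):
    """Map an RoI of size w x h (on the input image) to a pyramid level Pk."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / canonical))
    return max(k_min, min(k_max, k))

# A 224x224 RoI goes to P4; a 112x112 RoI goes to the finer level P3.
assert roi_to_fpn_level(224, 224) == 4
assert roi_to_fpn_level(112, 112) == 3
```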
Footnote 1: Here we introduce P6 only for covering a larger anchor scale of 512². P6 is simply a stride-two subsampling of P5. P6 is not used by the Fast R-CNN detector in the next section. | 1612.03144#23 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 24 | We attach predictor heads (in Fast R-CNN the heads are class-specific classifiers and bounding box regressors) to all RoIs of all levels. Again, the heads all share parameters, regardless of their levels. In [16], a ResNet's conv5 layers (a 9-layer deep subnetwork) are adopted as the head on top of the conv4 features, but our method has already harnessed conv5 to construct the feature pyramid. So unlike [16], we simply adopt RoI pooling to extract 7×7 features, and attach two hidden 1,024-d fully-connected (fc) layers (each followed by ReLU) before the final classification and bounding box regression layers. These layers are randomly initialized, as there are no pre-trained fc layers available in ResNets. Note that compared to the standard conv5 head, our 2-fc MLP head is lighter weight and faster.
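A minimal sketch of this 2-fc head is given below, assuming 256-channel FPN features, a 7×7 RoI pooling output, and 81 COCO classes (80 plus background); these defaults are illustrative assumptions, not values fixed by the text above.

```python
import torch.nn as nn

class TwoFCHead(nn.Module):
    """2-fc box head on top of 7x7 RoI-pooled features: two 1024-d fc layers
    with ReLU, then class scores and per-class box regression."""
    def __init__(self, in_channels=256, pool_size=7, num_classes=81):
        super().__init__()
        self.fc1 = nn.Linear(in_channels * pool_size * pool_size, 1024)
        self.fc2 = nn.Linear(1024, 1024)
        self.relu = nn.ReLU(inplace=True)
        self.cls_score = nn.Linear(1024, num_classes)
        self.bbox_pred = nn.Linear(1024, num_classes * 4)

    def forward(self, roi_feats):              # roi_feats: [N, 256, 7, 7]
        x = roi_feats.flatten(start_dim=1)
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        return self.cls_score(x), self.bbox_pred(x)
```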
Based on these adaptations, we can train and test Fast R-CNN on top of the feature pyramid. Implementation details are given in the experimental section.
# 5. Experiments on Object Detection | 1612.03144#24 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 25 | Based on these adaptations, we can train and test Fast R-CNN on top of the feature pyramid. Implementation details are given in the experimental section.
# 5. Experiments on Object Detection
We perform experiments on the 80 category COCO detection dataset [21]. We train using the union of 80k train images and a 35k subset of val images (trainval35k [2]), and report ablations on a 5k subset of val images (minival). We also report final results on the standard test set (test-std) [21] which has no disclosed labels.
As is common practice [12], all network backbones are pre-trained on the ImageNet1k classification set [33] and then fine-tuned on the detection dataset. We use the pre-trained ResNet-50 and ResNet-101 models that are publicly available [2]. Our code is a reimplementation of py-faster-rcnn [3] using Caffe2 [4].
# 5.1. Region Proposal with RPN
We evaluate the COCO-style Average Recall (AR) and AR on small, medium, and large objects (ARs, ARm, and ARl) following the definitions in [21]. We report results for 100 and 1000 proposals per image (AR100 and AR1k). | 1612.03144#25 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 26 | Implementation details. All architectures in Table 1 are trained end-to-end. The input image is resized such that its shorter side has 800 pixels. We adopt synchronized SGD training on 8 GPUs. A mini-batch involves 2 images per GPU and 256 anchors per image. We use a weight decay of 0.0001 and a momentum of 0.9. The learning rate is 0.02 for the first 30k mini-batches and 0.002 for the next 10k. For all RPN experiments (including baselines), we include the anchor boxes that are outside the image for training, which is unlike [29] where these anchor boxes are ignored. Other implementation details are as in [29]. Training RPN with FPN on 8 GPUs takes about 8 hours on COCO.
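For reference, the optimizer settings quoted above correspond to something like the following PyTorch sketch; the stand-in module, the single-GPU setup, and the use of MultiStepLR are illustrative assumptions, not the paper's Caffe2-based training code.

```python
import torch
import torch.nn as nn

# Stand-in module; the actual detector and the synchronized 8-GPU loop are omitted.
model = nn.Conv2d(256, 256, kernel_size=3, padding=1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.02,
                            momentum=0.9, weight_decay=0.0001)
# lr = 0.02 for the first 30k mini-batches, then 0.002 for the next 10k
# (scheduler stepped once per mini-batch).
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30000], gamma=0.1)
```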
[2] https://github.com/kaiminghe/deep-residual-networks
[3] https://github.com/rbgirshick/py-faster-rcnn
[4] https://github.com/caffe2/caffe2
# 5.1.1 Ablation Experiments | 1612.03144#26 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 27 |
# 5.1.1 Ablation Experiments
Comparisons with baselines. For fair comparisons with original RPNs [29], we run two baselines (Table 1(a, b)) using the single-scale map of C4 (the same as [16]) or C5, both using the same hyper-parameters as ours, including using 5 scale anchors of {32², 64², 128², 256², 512²}. Table 1(b) shows no advantage over (a), indicating that a single higher-level feature map is not enough because there is a trade-off between coarser resolutions and stronger semantics.
Placing FPN in RPN improves AR1k to 56.3 (Table 1(c)), which is an 8.0-point increase over the single-scale RPN baseline (Table 1(a)). In addition, the performance on small objects (AR1k_s) is boosted by a large margin of 12.9 points. Our pyramid representation greatly improves RPN's robustness to object scale variation. | 1612.03144#27 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 28 | How important is top-down enrichment? Table 1(d) shows the results of our feature pyramid without the top-down pathway. With this modification, the 1×1 lateral connections followed by 3×3 convolutions are attached to the bottom-up pyramid. This architecture simulates the effect of reusing the pyramidal feature hierarchy (Fig. 1(b)).
The results in Table 1(d) are just on par with the RPN baseline and lag far behind ours. We conjecture that this is because there are large semantic gaps between different levels on the bottom-up pyramid (Fig. 1(b)), especially for very deep ResNets. We have also evaluated a variant of Table 1(d) without sharing the parameters of the heads, but observed similarly degraded performance. This issue cannot be simply remedied by level-specific heads. | 1612.03144#28 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 29 | How important are lateral connections? Table 1(e) shows the ablation results of a top-down feature pyramid without the 1×1 lateral connections. This top-down pyramid has strong semantic features and fine resolutions. But we argue that the locations of these features are not precise, because these maps have been downsampled and upsampled several times. More precise locations of features can be directly passed from the finer levels of the bottom-up maps via the lateral connections to the top-down maps. As a result, FPN has an AR1k score 10 points higher than Table 1(e). | 1612.03144#29 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 30 | How important are pyramid representations? Instead of resorting to pyramid representations, one can attach the head to the highest-resolution, strongly semantic feature maps of P2 (i.e., the finest level in our pyramids). Similar to the single-scale baselines, we assign all anchors to the P2 feature map. This variant (Table 1(f)) is better than the baseline but inferior to our approach. RPN is a sliding window detector with a fixed window size, so scanning over pyramid levels can increase its robustness to scale variance. In addition, we note that using P2 alone leads to more anchors (750k, Table 1(f)) caused by its large spatial resolution. This result suggests that a larger number of anchors is not sufficient in itself to improve accuracy. | 1612.03144#30 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 31 |
| RPN | feature | # anchors | lateral? | top-down? | AR100 | AR1k | AR1k_s | AR1k_m | AR1k_l |
|---|---|---|---|---|---|---|---|---|---|
| (a) baseline on conv4 | C4 | 47k | | | 36.1 | 48.3 | 32.0 | 58.7 | 62.2 |
| (b) baseline on conv5 | C5 | 12k | | | 36.3 | 44.9 | 25.3 | 55.5 | 64.2 |
| (c) FPN | {Pk} | 200k | ✓ | ✓ | 44.0 | 56.3 | 44.9 | 63.4 | 66.2 |
| Ablation experiments follow: | | | | | | | | | |
| (d) bottom-up pyramid | {Pk} | 200k | ✓ | | 37.4 | 49.5 | 30.5 | 59.9 | 68.0 |
| (e) top-down pyramid, w/o lateral | {Pk} | 200k | | ✓ | 34.5 | 46.1 | 26.5 | 57.4 | 64.7 |
| (f) only finest level | P2 | 750k | ✓ | ✓ | 38.4 | 51.3 | 35.1 | 59.7 | 67.6 |
Table 1. Bounding box proposal results using RPN [29], evaluated on the COCO minival set. All models are trained on trainval35k. The columns "lateral" and "top-down" denote the presence of lateral and top-down connections, respectively. The column "feature" denotes the feature maps on which the heads are attached. All results are based on ResNet-50 and share the same hyper-parameters. | 1612.03144#31 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 32 |
| Fast R-CNN | proposals | feature | head | lateral? | top-down? | AP@0.5 | AP | APs | APm | APl |
|---|---|---|---|---|---|---|---|---|---|---|
| (a) baseline on conv4 | RPN, {Pk} | C4 | conv5 | | | 54.7 | 31.9 | 15.7 | 36.5 | 45.5 |
| (b) baseline on conv5 | RPN, {Pk} | C5 | 2fc | | | 52.9 | 28.8 | 11.9 | 32.4 | 43.4 |
| (c) FPN | RPN, {Pk} | {Pk} | 2fc | ✓ | ✓ | 56.9 | 33.9 | 17.8 | 37.7 | 45.8 |
| Ablation experiments follow: | | | | | | | | | | |
| (d) bottom-up pyramid | RPN, {Pk} | {Pk} | 2fc | ✓ | | 44.9 | 24.9 | 10.9 | 24.4 | 38.5 |
| (e) top-down pyramid, w/o lateral | RPN, {Pk} | {Pk} | 2fc | | ✓ | 54.0 | 31.3 | 13.3 | 35.2 | 45.3 |
| (f) only finest level | RPN, {Pk} | P2 | 2fc | ✓ | ✓ | 56.3 | 33.4 | 17.3 | 37.3 | 45.6 |
| 1612.03144#32 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 33 | Table 2. Object detection results using Fast R-CNN [11] on a fixed set of proposals (RPN, {Pk}, Table 1(c)), evaluated on the COCO minival set. Models are trained on the trainval35k set. All results are based on ResNet-50 and share the same hyper-parameters.
| Faster R-CNN | proposals | feature | head | lateral? | top-down? | AP@0.5 | AP | APs | APm | APl |
|---|---|---|---|---|---|---|---|---|---|---|
| (*) baseline from He et al. [16] | RPN, C4 | C4 | conv5 | | | 47.3 | 26.3 | - | - | - |
| (a) baseline on conv4 | RPN, C4 | C4 | conv5 | | | 53.1 | 31.6 | 13.2 | 35.6 | 47.1 |
| (b) baseline on conv5 | RPN, C5 | C5 | 2fc | | | 51.7 | 28.0 | 9.6 | 31.9 | 43.1 |
| (c) FPN | RPN, {Pk} | {Pk} | 2fc | ✓ | ✓ | 56.9 | 33.9 | 17.8 | 37.7 | 45.8 |
Table 3. Object detection results using Faster R-CNN [29] evaluated on the COCO minival set. The backbone network for RPN is consistent with Fast R-CNN. Models are trained on the trainval35k set and use ResNet-50. † Provided by authors of [16]. | 1612.03144#33 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |
1612.03144 | 34 | # 5.2. Object Detection with Fast/Faster R-CNN
Next we investigate FPN for region-based (non-sliding window) detectors. We evaluate object detection by the COCO-style Average Precision (AP) and PASCAL-style AP (at a single IoU threshold of 0.5). We also report COCO AP on objects of small, medium, and large sizes (namely, APs, APm, and APl) following the definitions in [21].
Implementation details. The input image is resized such that its shorter side has 800 pixels. Synchronized SGD is used to train the model on 8 GPUs. Each mini-batch involves 2 images per GPU and 512 RoIs per image. We use a weight decay of 0.0001 and a momentum of 0.9. The learning rate is 0.02 for the first 60k mini-batches and 0.002 for the next 20k. We use 2000 RoIs per image for training and 1000 for testing. Training Fast R-CNN with FPN takes about 10 hours on the COCO dataset.
# 5.2.1 Fast R-CNN (on fixed proposals) | 1612.03144#34 | Feature Pyramid Networks for Object Detection | Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | http://arxiv.org/pdf/1612.03144 | Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie | cs.CV | null | null | cs.CV | 20161209 | 20170419 | [
{
"id": "1703.06870"
}
] |