doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1707.07012 | 18 |
Figure 3. Controller model architecture for recursively constructing one block of a convolutional cell. Each block requires selecting 5 discrete parameters, each of which corresponds to the output of a softmax layer. Example constructed block shown on right. A convolutional cell contains B blocks, hence the controller contains 5B softmax layers for predicting the architecture of a convolutional cell. In our experiments, the number of blocks B is 5.
Finally, our work makes use of the reinforcement learning proposal in NAS [71]; however, it is also possible to use random search to search for architectures in the NASNet search space. In random search, instead of sampling the decisions from the softmax classifiers in the controller RNN, we can sample the decisions from the uniform distribution. In our experiments, we find that random search is slightly worse than reinforcement learning on the CIFAR-10 dataset. Although there is value in using reinforcement learning, the gap is smaller than what is found in the original work of [71]. This result suggests that 1) the NASNet search space is well-constructed such that random search can perform reasonably well and 2) random search is a difficult baseline to beat. We will compare reinforcement learning against random search in Section 4.4.
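A minimal sketch of the random-search baseline contrasted above — drawing each of the 5B discrete cell decisions uniformly at random instead of from the controller's softmax outputs — assuming an illustrative operation list and block structure (not the paper's released code):

```python
# Illustrative sketch of sampling one cell in a NASNet-like search space by
# random search: each of B blocks needs 5 discrete decisions (two input hidden
# states, two operations, one combination). The operation list is an assumption.
import random

OPS = ["identity", "3x3 sep conv", "5x5 sep conv", "3x3 avg pool", "3x3 max pool"]
B = 5  # blocks per cell

def random_search_cell():
    """Sample a cell by drawing each of the 5 decisions per block uniformly."""
    cell = []
    for b in range(B):
        hidden_1 = random.randrange(2 + b)   # which earlier hidden state feeds input 1
        hidden_2 = random.randrange(2 + b)   # which earlier hidden state feeds input 2
        op_1 = random.choice(OPS)            # operation applied to input 1
        op_2 = random.choice(OPS)            # operation applied to input 2
        combine = random.choice(["add", "concat"])
        cell.append((hidden_1, hidden_2, op_1, op_2, combine))
    return cell

print(random_search_cell())
```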
# 4. Experiments and Results | 1707.07012#18 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.07012 | 19 | # 4. Experiments and Results
In this section, we describe our experiments with the method described above to learn convolutional cells. In summary, all architecture searches are performed using the CIFAR-10 classification task [31]. The controller RNN was trained using Proximal Policy Optimization (PPO) [51] by employing a global workqueue system for generating a pool of child networks controlled by the RNN. In our experiments, the pool of workers in the workqueue consisted of 500 GPUs.
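The global-workqueue pattern described above can be pictured with the following sketch; the worker pool, `propose_cell` and `train_and_evaluate` are illustrative stand-ins (a few CPU processes here rather than the 500 GPU workers used in the paper):

```python
# Illustrative sketch only: a controller proposes candidate cells, a pool of
# workers evaluates each child network, and the accuracies come back as rewards.
from multiprocessing import Pool
import random

def propose_cell(_):
    # Stand-in for sampling a cell description from the controller RNN.
    return [random.randrange(5) for _ in range(5 * 5)]  # 5B decisions, B = 5

def train_and_evaluate(cell):
    # Stand-in for training the child network on CIFAR-10 and returning
    # its validation accuracy (the controller's reward).
    return sum(cell) % 100 / 100.0

if __name__ == "__main__":
    candidates = [propose_cell(i) for i in range(8)]
    with Pool(processes=4) as pool:          # the paper used 500 GPU workers
        rewards = pool.map(train_and_evaluate, candidates)
    # The rewards would then drive a PPO update of the controller.
    print(max(rewards))
```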
convolutions and the number of branches compared with competing architectures [53, 59, 20, 60, 58]. Subsequent experiments focus on this convolutional cell architecture, although we examine the efficacy of other, top-ranked convolutional cells in ImageNet experiments (described in Appendix B) and report their results as well. We call the three networks constructed from the best three searches NASNet-A, NASNet-B and NASNet-C. | 1707.07012#19 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 20 | [Figure: correlation matrix of word-based metrics (WBM), grammar-based metrics (GBM) and human ratings] | 1707.06875#20 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 20 | We demonstrate the utility of the convolutional cells by employing this learned architecture on CIFAR-10 and a family of ImageNet classification tasks. The latter family of tasks is explored across a few orders of magnitude in computational budget. After having learned the convolutional cells, several hyper-parameters may be explored to build a final network for a given task: (1) the number of cell repeats N and (2) the number of filters in the initial convolutional cell. After selecting the number of initial filters, we use a common heuristic to double the number of filters whenever the stride is 2. Finally, we define a simple notation, e.g., 4 @ 64, to indicate these two parameters in all networks, where 4 and 64 indicate the number of cell repeats and the number of filters in the penultimate layer of the network, respectively.
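A minimal sketch of how the two hyper-parameters and the "N @ F" notation translate into a network skeleton, assuming the initial filter count is derived by halving backwards from the penultimate layer (an illustrative choice, not the released model code):

```python
# Illustrative sketch: N normal-cell repeats per stage, filters doubled at each
# stride-2 reduction cell, so "4 @ 64" gives 4 repeats and 64 penultimate filters.
def build_nasnet_like(N=4, penultimate_filters=64, num_stages=3):
    layers, filters = [], penultimate_filters // (2 ** (num_stages - 1))
    for stage in range(num_stages):
        layers += [("normal_cell", filters)] * N      # N cell repeats per stage
        if stage < num_stages - 1:
            filters *= 2                              # double filters at stride 2
            layers.append(("reduction_cell", filters))
    return layers

# "4 @ 64": 4 cell repeats, 64 filters in the penultimate layer.
for name, f in build_nasnet_like(4, 64):
    print(name, f)
```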
The result of this search process over 4 days yields several candidate convolutional cells. We note that this search procedure is almost 7× faster than previous approaches [71] that took 28 days.1 Additionally, we demonstrate below that the resulting architecture is superior in accuracy.
Figure 4 shows a diagram of the top performing Normal Cell and Reduction Cell. Note the prevalence of separable | 1707.07012#20 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.07012 | 21 | Figure 4 shows a diagram of the top performing Normal Cell and Reduction Cell. Note the prevalence of separable
1In particular, we note that previous architecture search [71] used 800 GPUs for 28 days resulting in 22,400 GPU-hours. The method in this paper uses 500 GPUs across 4 days resulting in 2,000 GPU-hours. The former effort used Nvidia K40 GPUs, whereas the current efforts used faster Nvidia P100s. Discounting the fact that we use faster hardware, we estimate that the current procedure is roughly about 7× more efficient. | 1707.07012#21 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 22 | Figure 1: Spearman correlation results for TGEN on BAGEL. Bordered area shows correlations between human ratings and automatic metrics, the rest shows correlations among the metrics. Blue colour of circles indicates positive correlation, while red indicates negative correlation. The size of circles denotes the correlation strength.
[Figure 2 plot: Informativeness, Naturalness and Quality panels]
Figure 2: Williams test results: X represents a non-significant difference between correlations (p < 0.05; top: WBMs, bottom: GBMs).
ness for the TGEN system on BAGEL (ρ = 0.33, p < 0.01, see Figure 1). However, the wps metric (amongst most others) is not robust across systems and datasets: Its correlation on other datasets is very weak (ρ ≤ .18) and its correlation with informativeness ratings of LOLS outputs is insignificant. • As a sanity check, we also measure a random score [0.0, 1.0] which proves to have a close-to-zero correlation with human ratings (highest ρ = 0.09).
# 7.2 Accuracy of Relative Rankings | 1707.06875#22 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 22 | For complete details of the architecture learning algorithm and the controller system, please refer to Appendix A. Importantly, when training NASNets, we discovered ScheduledDropPath, a modified version of DropPath [33], to be an effective regularization method for NASNet. In DropPath [33], each path in the cell is stochastically dropped with some fixed probability during training. In our modified version, ScheduledDropPath, each path in the cell is dropped out with a probability that is linearly increased over the course of training. We find that DropPath does not work well for NASNets, while ScheduledDropPath significantly improves the final performance of NASNets in both CIFAR and ImageNet experiments.
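A minimal sketch of the distinction between DropPath and ScheduledDropPath described above, written framework-agnostically in NumPy rather than the authors' implementation:

```python
# Illustrative sketch: DropPath zeros an entire path with a fixed probability,
# while ScheduledDropPath ramps that probability linearly over training.
import numpy as np

def drop_path(path_output, drop_prob, training, rng=np.random):
    """Zero out an entire path with probability drop_prob (train time only)."""
    if not training or drop_prob == 0.0:
        return path_output
    keep_prob = 1.0 - drop_prob
    mask = rng.binomial(1, keep_prob)          # drop the whole path, not elements
    return path_output * mask / keep_prob      # rescale to keep the expectation

def scheduled_drop_prob(final_drop_prob, step, total_steps):
    """Linearly increase the drop probability from 0 to its final value."""
    return final_drop_prob * min(step / total_steps, 1.0)

# Example: halfway through training, a final rate of 0.4 is applied at 0.2.
x = np.ones((4, 4))
p = scheduled_drop_prob(final_drop_prob=0.4, step=5000, total_steps=10000)
print(p, drop_path(x, p, training=True).mean())
```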
[Figure 4: diagrams of the Normal Cell and the Reduction Cell] | 1707.07012#22 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 23 | # 7.2 Accuracy of Relative Rankings
We now evaluate a more coarse measure, namely the metrics' ability to predict relative human ratings. That is, we compute the score of each metric for two system output sentences corresponding to the same MR. The prediction of a metric is correct if it orders the sentences in the same way as median human ratings (note that ties are allowed). Following previous work (Vedantam et al., 2015; Kilickaya et al., 2017), we mainly concentrate on WBMs. Results summarised in Table 4 show that most metrics' performance is not significantly different from that of a random score (Wilcoxon
signed rank test). While the random score fluctuates between 25.4–44.5% prediction accuracy, the metrics achieve an accuracy of between 30.6–49.8%. Again, the performance of the metrics is dataset-specific: Metrics perform best on BAGEL data; for SFHOTEL, metrics show mixed performance while for SFREST, metrics perform worst. | 1707.06875#23 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 23 | [Figure 4: diagrams of the Normal Cell and the Reduction Cell]
Figure 4. Architecture of the best convolutional cells (NASNet-A) with B = 5 blocks identified with CIFAR-10. The input (white) is the hidden state from previous activations (or input image). The output (pink) is the result of a concatenation operation across all resulting branches. Each convolutional cell is the result of B blocks. A single block corresponds to two primitive operations (yellow) and a combination operation (green). Note that colors correspond to operations in Figure 3.
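A minimal sketch of how a cell is assembled from the block structure in the caption above; `apply_op` and the toy block specification are illustrative placeholders:

```python
# Illustrative sketch: each block applies one primitive operation to each of its
# two chosen inputs and combines the results; the cell output concatenates the
# branches produced by the B blocks.
def apply_op(op_name, h):
    return f"{op_name}({h})"                      # stand-in for a real conv/pool op

def build_cell(block_specs, h_prev, h_prev_prev):
    hiddens = [h_prev_prev, h_prev]               # inputs: two previous hidden states
    for in1, in2, op1, op2, combine in block_specs:
        left = apply_op(op1, hiddens[in1])
        right = apply_op(op2, hiddens[in2])
        hiddens.append(f"{combine}({left}, {right})")
    branches = hiddens[2:]                        # outputs of the B blocks
    return "concat(" + ", ".join(branches) + ")"

spec = [(0, 1, "sep3x3", "identity", "add")] * 5  # B = 5 identical toy blocks
print(build_cell(spec, "h[-1]", "h[-2]"))
```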
# 4.1. Results on CIFAR-10 Image Classification
CNNs hand-designed for this operating regime [24, 70].
For the task of image classification with CIFAR-10, we set N = 4 or 6 (Figure 2). The test accuracies of the best architectures are reported in Table 1 along with other state-of-the-art models. As can be seen from the Table, a large NASNet-A model with cutout data augmentation [12] achieves a state-of-the-art error rate of 2.40% (averaged across 5 runs), which is slightly better than the previous best record of 2.56% by [12]. The best single run from our model achieves 2.19% error rate. | 1707.07012#23 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 24 | informat. naturalness quality BAGEL SFHOTEL raw data raw data TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, METEOR, SIM TER, BLEU1-4, ROUGE, LEPOR, CIDEr, METEOR, TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, METEOR, SIM METEOR TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, METEOR, SIM N/A SIM SFREST raw data SIM LEPOR N/A quant. data TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, N/A N/A METEOR SIM
Table 4: Metrics predicting relative human rating with significantly higher accuracy than a random baseline. | 1707.06875#24 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 24 | # 4.2. Results on ImageNet Image Classification
We performed several sets of experiments on ImageNet with the best convolutional cells learned from CIFAR-10. We emphasize that we merely transfer the architectures from CIFAR-10 but train all ImageNet model weights from scratch.
Results are summarized in Tables 2 and 3 and Figure 5. In the first set of experiments, we train several image classification systems operating on 299x299 or 331x331 resolution images with different experiments scaled in computational demand to create models that are roughly on par in computational cost with Inception-v2 [29], Inception-v3 [60] and PolyNet [69]. We show that this family of models achieves state-of-the-art performance with fewer floating point operations and parameters than comparable architectures. Second, we demonstrate that by adjusting the scale of the model we can achieve state-of-the-art performance at smaller computational budgets, exceeding streamlined
Note we do not have residual connections between convolutional cells as the models learn skip connections on their own. We empirically found that manually inserting residual connections between cells does not help performance. Our training setup on ImageNet is similar to [60], but please see Appendix A for details. Table 2 shows that | 1707.07012#24 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 25 | Table 4: Metrics predicting relative human rating with signiï¬cantly higher accuracy than a random baseline.
Discussion: Our data differs from the one used in previous work (Vedantam et al., 2015; Kilickaya et al., 2017), which uses explicit relative rankings ("Which output do you prefer?"), whereas we compare two Likert-scale ratings. As such, we have 3 possible outcomes (allowing ties). This way, we can account for equally valid system outputs, which is one of the main drawbacks of forced-choice approaches (Hodosh and Hockenmaier, 2016). Our results are akin to previous work: Kilickaya et al. (2017) report results between 60-74% accuracy for binary classification on machine-machine data, which is comparable to our results for 3-way classification. | 1707.06875#25 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 25 | the convolutional cells discovered with CIFAR-10 generalize well to ImageNet problems. In particular, each model based on the convolutional cells exceeds the predictive performance of the corresponding hand-designed model. Importantly, the largest model achieves a new state-of-the-art performance for ImageNet (82.7%) based on single, non-ensembled predictions, surpassing previous best published result by ~1.2% [8]. Among the unpublished works, our model is on par with the best reported result of 82.7% [25], while having significantly fewer floating point operations. Figure 5 shows a complete summary of our results in comparison with other published results. Note the family of models based on convolutional cells provides an envelope over a broad class of human-invented architectures. | 1707.07012#25 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 26 | Still, we observe a mismatch between the ordinal human ratings and the continuous metrics. For example, humans might rate system A and system B both as a 6, whereas BLEU, for example, might assign 0.98 and 1.0 respectively, meaning that BLEU will declare system B as the winner. In order to account for this mismatch, we quantise our metric data to the same scale as the median scores from our human ratings.9 [Footnote 9: Note that this mismatch can also be accounted for by continuous rating scales, as suggested by Belz and Kow (2011).]
Applied to SFREST, where we previously got our worst results, we can see an improvement for predicting informativeness, where all WBMs now perform significantly better than the random baseline (see Table 4). In the future, we will investigate related discriminative approaches, e.g. (Hodosh and Hockenmaier, 2016; Kannan and Vinyals, 2017), where the task is simplified to distinguishing correct from incorrect output.
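A minimal sketch of the two evaluation steps discussed above — quantising continuous metric scores onto the 1-6 Likert scale of the median human ratings, and scoring 3-way relative rankings with ties allowed; function names and the toy data are illustrative assumptions:

```python
# Illustrative sketch, not the authors' released evaluation code.
import numpy as np

def quantise_to_likert(scores, low=1, high=6):
    """Linearly rescale metric scores to [low, high] and round to integers."""
    scores = np.asarray(scores, dtype=float)
    span = scores.max() - scores.min()
    if span == 0:
        return np.full_like(scores, (low + high) // 2, dtype=int)
    rescaled = low + (scores - scores.min()) / span * (high - low)
    return np.rint(rescaled).astype(int)

def ranking_accuracy(metric_a, metric_b, human_a, human_b):
    """Fraction of output pairs (same MR) ordered the same way as the median
    human ratings; ties count as a third possible outcome."""
    correct = 0
    for ma, mb, ha, hb in zip(metric_a, metric_b, human_a, human_b):
        correct += np.sign(ma - mb) == np.sign(ha - hb)
    return correct / len(metric_a)

# Toy usage: metric scores for two outputs per MR vs. median human ratings.
bleu_a, bleu_b = [0.98, 0.40, 0.75], [1.00, 0.42, 0.20]
hum_a, hum_b = [6, 2, 5], [6, 3, 2]
q = quantise_to_likert(bleu_a + bleu_b)
print(ranking_accuracy(q[:3], q[3:], hum_a, hum_b))
```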
# 8 Error Analysis
In this section, we attempt to uncover why automatic metrics perform so poorly.
# 8.1 Scales | 1707.06875#26 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 26 | Finally, we test how well the best convolutional cells may perform in a resource-constrained setting, e.g., mobile devices (Table 3). In these settings, the number of floating point operations is severely constrained and predictive performance must be weighed against latency requirements on a device with limited computational resources. MobileNet [24] and ShuffleNet [70] provide state-of-the-art results obtaining 70.6% and 70.9% accuracy, respectively on | 1707.07012#26 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 27 | # 8 Error Analysis
In this section, we attempt to uncover why automatic metrics perform so poorly.
# 8.1 Scales
We first explore the hypothesis that metrics are good in distinguishing extreme cases, i.e. system outputs which are rated as clearly good or bad by the human judges, but do not perform well for utterances rated in the middle of the Likert scale, as suggested by Kilickaya et al. (2017). We "bin" our data into three groups: bad, which comprises low ratings (≤2); good, comprising high ratings (≥5); and finally a group comprising average ratings.
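A minimal sketch of this binning analysis, assuming per-utterance metric scores and median human ratings and using SciPy's spearmanr for the per-bin correlations (illustrative, not the authors' code):

```python
# Illustrative sketch: group utterances by median human rating into bad (<=2),
# good (>=5) and average bins, then correlate metric scores with human ratings
# inside each bin.
import numpy as np
from scipy.stats import spearmanr

def bin_label(rating):
    if rating <= 2:
        return "bad"
    if rating >= 5:
        return "good"
    return "average"

def per_bin_correlation(metric_scores, human_ratings):
    metric_scores, human_ratings = map(np.asarray, (metric_scores, human_ratings))
    labels = np.array([bin_label(r) for r in human_ratings])
    result = {}
    for bin_name in ("bad", "average", "good"):
        mask = labels == bin_name
        if mask.sum() > 2:                         # need a few points per bin
            rho, p = spearmanr(metric_scores[mask], human_ratings[mask])
            result[bin_name] = (rho, p)
    return result

scores = [0.10, 0.15, 0.20, 0.50, 0.55, 0.60, 0.80, 0.85, 0.90]
ratings = [1, 2, 1, 3, 4, 3, 6, 5, 6]
print(per_bin_correlation(scores, ratings))
```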
We find that utterances with low human ratings of informativeness and naturalness correlate significantly better (p < 0.05) with automatic metrics than those with average and good human ratings. For example, as shown in Figure 3, the correlation between WBMs and human ratings for utterances with low informativeness scores ranges between 0.3 ≤ ρ ≤ 0.5 (moderate correlation), while the highest correlation for utterances of average and high informativeness barely reaches ρ ≤ 0.2 (very weak correlation). The same pattern can be observed for correlations with quality and naturalness ratings. | 1707.06875#27 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 27 | model | depth | # params | error rate (%)
---|---|---|---
DenseNet (L = 40, k = 12) [26] | 40 | 1.0M | 5.24
DenseNet (L = 100, k = 12) [26] | 100 | 7.0M | 4.10
DenseNet (L = 100, k = 24) [26] | 100 | 27.2M | 3.74
DenseNet-BC (L = 100, k = 40) [26] | 190 | 25.6M | 3.46
Shake-Shake 26 2x32d [18] | 26 | 2.9M | 3.55
Shake-Shake 26 2x96d [18] | 26 | 26.2M | 2.86
Shake-Shake 26 2x96d + cutout [12] | 26 | 26.2M | 2.56
NAS v3 [71] | 39 | 7.1M | 4.47
NAS v3 [71] | 39 | 37.4M | 3.65
NASNet-A (6 @ 768) | - | 3.3M | 3.41
NASNet-A (6 @ 768) + cutout | - | 3.3M | 2.65
NASNet-A (7 @ 2304) | - | 27.6M | 2.97
NASNet-A (7 @ 2304) + cutout | - | 27.6M |
NASNet-B (4 @ 1152) | - | 2.6M |
NASNet-C (4 @ 640) | - | 3.1M | | 1707.07012#27 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 28 | This discrepancy in correlation results between low and other user ratings, together with the fact that the majority of system outputs are rated "good" for informativeness (79%), naturalness (64%) and quality (58%), whereas low ratings do not exceed 7% in total, could explain why the overall correlations are low (Section 7) despite the observed trends in relationship between average system-level performance scores (Section 6). It also explains why the RNNLG system, which contains very few instances of low user ratings, shows poor correlation between human ratings and automatic metrics. | 1707.06875#28 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 29 | # | system | MR | system output | human reference | WOMs | SIM | humans
---|---|---|---|---|---|---|---
1 | LOLS | inform(name = the donatello, hasinternet = yes) | well there is a hotel with internet access called the donatello | the donatello has internet | 1.4 | 5 | 6
2 | LOLS | inform nomatch(area = embarcadero, kidsallowed = yes, pricerange = expensive) | i but i but i but i but i but i but i but i but i but i | unfortunately i could not find any expensive restaurants in embarcadero that allow kids. | 1.1 | 1 | 1
3 | TGEN | inform(name = X, area = riverside, eattype = restaurant, food = fastfood, pricerange = cheap) | x is a restaurant on the riverside called located at the riverside and at is | x is a cheap fastfood restaurant located near the riverside | 2.4 | 4 | 1
4 | RNNLG | inform nomatch(kidsallowed = yes, food = moroccan) | i am sorry, i did not find any restaurants that allows kids and serve moroccan. | sorry, there are no restaurants allowing kids and serving moroccan food | 1.85 | 4 | 5 | 1707.06875#29 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 30 | Table 5: Example pairs of MRs and system outputs from our data, contrasting the average of word-overlap metrics (normalised in the 1-6 range) and semantic similarity (SIM) with human ratings (median of all measures).
[Figure 3 plot, top panel: correlations for utterances with bad informativeness (INF, NAT, QUA)]
Figure 3: Correlation between automatic metrics (WBMs) and human ratings for utterances of bad informativeness (top), and average and good informativeness (bottom).
# Impact of Target Data | 1707.06875#30 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 30 | [Figure 5: ImageNet top-1 accuracy versus number of multiply-add operations (left) and number of parameters (right) for NASNet-A models compared with published architectures such as Inception, ResNet, ResNeXt, Xception, PolyNet, SENet, DPN-131, VGG-16, MobileNet and ShuffleNet] | 1707.07012#30 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
1707.06875 | 31 | Characteristics of Data: In Section 7.1, we observed that datasets have a significant impact on how well automatic metrics reflect human ratings. A closer inspection shows that BAGEL data differs significantly from SFREST and SFHOTEL, both in terms of grammatical and MR properties. BAGEL has significantly shorter references both in terms of number of characters and words compared to the other two datasets. Despite being shorter, the words in BAGEL references are significantly more often polysyllabic. Furthermore, BAGEL only consists of utterances generated from inform MRs, while SFREST and SFHOTEL also have less complex MR types, such as confirm, goodbye, etc. Utterances produced from inform MRs are significantly longer and have a significantly higher correlation with human ratings of informativeness and naturalness than non-inform utterance types. In other words, BAGEL is the most complex dataset to generate from. Even though it is more complex, metrics perform most reliably on BAGEL here (note that the correlation is
1707.06875 | 32 | is the most complex dataset to generate from. Even though it is more complex, metrics perform most reliably on BAGEL here (note that the correlation is still only weak). One possible explanation is that BAGEL only contains two human references per MR, whereas SFHOTEL and SFREST both contain 5.35 references per MR on average. Having more references means that WBMs naturally will return higher scores ("anything goes"). This problem could possibly be solved by weighting multiple references according to their quality, as suggested by (Galley et al., 2015), or following a reference-less approach (Specia et al., 2010). Quality of Data: Our corpora contain crowd-sourced human references that have grammatical errors, e.g. "Fifth Floor does not allow childs" (SFREST reference). Corpus-based methods may pick up these errors, and word-based metrics will rate these system utterances as correct, whereas we can expect human judges to be sensitive to ungrammatical utterances. Note that the parsing score (while being a crude approximation of grammaticality) achieves one of our highest correlation results against human ratings, with |ρ| =
1707.07012 | 32 | Figure 5. Accuracy versus computational demand (left) and number of parameters (right) across top performing published CNN architectures on ImageNet 2012 ILSVRC challenge prediction task. Computational demand is measured in the number of floating-point multiply-add operations to process a single image. Black circles indicate previously published results and red squares highlight our proposed models.
224x224 images using ~550M multiply-add operations. An architecture constructed from the best convolutional cells achieves superior predictive performance (74.0% accuracy) surpassing previous models but with comparable computational demand. In summary, we find that the learned convolutional cells are flexible across model scales, achieving state-of-the-art performance across almost 2 orders of magnitude in computational budget.
# 4.3. Improved features for object detection
Image classification networks provide generic image features that may be transferred to other computer vision problems [13]. One of the most important problems is the spatial localization of objects within an image. To further validate the performance of the family of NASNet-A networks, we test whether object detection systems derived from NASNet-A lead to improvements in object detection [28].
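As a concrete, hedged illustration of the recipe described above (reuse an ImageNet classification network as the image featurizer inside Faster-RCNN), the sketch below builds a torchvision Faster-RCNN around a MobileNetV2 backbone. MobileNetV2 stands in for NASNet-A only because it ships with recent torchvision releases; the anchor and ROI-pooling settings are the library's tutorial defaults, not the configuration used in the paper.

```python
# Sketch: reuse an ImageNet classification network as the image featurizer
# inside Faster-RCNN. MobileNetV2 stands in for NASNet-A here only because
# it ships with torchvision; anchor/ROI settings follow the library tutorial.
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280  # FasterRCNN needs the channel depth of the feature map

anchor_generator = AnchorGenerator(
    sizes=((32, 64, 128, 256, 512),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

model = FasterRCNN(
    backbone,
    num_classes=91,  # COCO categories + background
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)
model.eval()

with torch.no_grad():
    detections = model([torch.rand(3, 600, 600)])  # one dummy 600x600 image
print(detections[0]["boxes"].shape, detections[0]["scores"].shape)
```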
1707.06875 | 33 | the parsing score (while being a crude approximation of grammaticality) achieves one of our highest correlation results against human ratings, with |ρ| = .31. Grammatical errors raise questions about the quality of the training data, especially when being crowdsourced. For example, Belz and Reiter (2006) find that human experts assign low rankings to their original corpus text. Again, weighting (Galley et al., 2015) or reference-less approaches (Specia et al., 2010) might remedy this issue.
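The correlation figures quoted throughout this analysis are Spearman coefficients between per-utterance metric scores and human ratings; a minimal sketch of that computation with SciPy, on made-up score vectors, is:

```python
# Sketch: Spearman correlation between an automatic metric and human ratings
# over the same utterances. The score vectors below are made up.
from scipy.stats import spearmanr

human_ratings = [6, 5, 2, 4, 1, 3, 5, 2]                         # e.g. quality, 1-6 scale
metric_scores = [0.42, 0.55, 0.20, 0.35, 0.25, 0.18, 0.60, 0.12]

rho, p_value = spearmanr(human_ratings, metric_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```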
1707.07012 | 34 |
| Model | image size | # parameters | Mult-Adds | Top 1 Acc. (%) | Top 5 Acc. (%) |
|---|---|---|---|---|---|
| Inception V2 [29] | 224×224 | 11.2 M | 1.94 B | 74.8 | 92.2 |
| NASNet-A (5 @ 1538) | 299×299 | 10.9 M | 2.35 B | 78.6 | 94.2 |
| Inception V3 [60] | 299×299 | 23.8 M | 5.72 B | 78.8 | 94.4 |
| Xception [9] | 299×299 | 22.8 M | 8.38 B | 79.0 | 94.5 |
| Inception ResNet V2 [58] | 299×299 | 55.8 M | 13.2 B | 80.1 | 95.1 |
| NASNet-A (7 @ 1920) | 299×299 | 22.6 M | 4.93 B | 80.8 | 95.3 |
| ResNeXt-101 (64 x 4d) [68] | 320×320 | 83.6 M | 31.5 B | 80.9 | 95.6 |
| PolyNet [69] | 331×331 | 92 M | 34.7 B | 81.3 | 95.8 |
| DPN-131 [8] | 320×320 | 79.5 M | 32.0 B | 81.5 | 95.8 |
| SENet [25] | 320×320 | 145.8 M | 42.3 B | 82.7 | 96.2 |
| NASNet-A (6 @ 4032) | 331×331 | 88.9 M | 23.8 B | 82.7 | 96.2 |
1707.06875 | 35 |
| Study | Sentence Planning (human ratings) | Surface Realisation (human ratings) | Domain |
|---|---|---|---|
| this paper | weak positive (ρ = 0.33, WPS) | weak negative (ρ = -0.31, parser) | NLG, restaurant/hotel search |
| (Reiter and Belz, 2009) | none | strong positive (Pearson's r = 0.96, NIST) | NLG, weather forecast |
| (Stent et al., 2005) | weak positive (ρ = 0.47, LSA) | negative (ρ = -0.56, NIST) | paraphrasing of news |
| (Liu et al., 2016) | weak positive (ρ = 0.35, BLEU-4) | N/A | dialogue/Twitter pairs |
| (Elliott and Keller, 2014) | positive (ρ = 0.53, METEOR) | N/A | image caption |
| (Kilickaya et al., 2017) | positive (ρ = 0.64, SPICE) | N/A | image caption |
| (Cahill, 2009) | N/A | negative (ρ = -0.64, ROUGE) | NLG, German news texts |
| (Espinosa et al., 2010) | weak positive (ρ = 0.43, TER) | positive (ρ = 0.62, BLEU-4) | NLG, news texts |
1707.07012 | 35 | Table 2. Performance of architecture search and other published state-of-the-art models on ImageNet classification. Mult-Adds indicate the number of composite multiply-accumulate operations for a single image. Note that the composite multiply-accumulate operations are calculated for the image size reported in the table. Model size for [25] calculated from open-source implementation.
| Model | # parameters | Mult-Adds | Top 1 Acc. (%) | Top 5 Acc. (%) |
|---|---|---|---|---|
| Inception V1 [59] | 6.6 M | 1,448 M | 69.8† | 89.9 |
| MobileNet-224 [24] | 4.2 M | 569 M | 70.6 | 89.5 |
| ShuffleNet (2x) [70] | ~5 M | 524 M | 70.9 | 89.8 |
| NASNet-A (4 @ 1056) | 5.3 M | 564 M | 74.0 | 91.6 |
| NASNet-B (4 @ 1536) | 5.3 M | 488 M | 72.8 | 91.3 |
| NASNet-C (3 @ 960) | 4.9 M | 558 M | 72.5 | 91.0 |
1707.07012 | 36 | # parameters Mult-Adds Top 1 Acc. (%) Top 5 Acc. (%)
Table 3. Performance on ImageNet classification on a subset of models operating in a constrained computational setting, i.e., < 1.5 B multiply-accumulate operations per image. All models use 224x224 images. † indicates top-1 accuracy not reported in [59] but from open-source implementation.
| Model | resolution | mAP (mini-val) | mAP (test-dev) |
|---|---|---|---|
| MobileNet-224 [24] | 600×600 | 19.8% | - |
| ShuffleNet (2x) [70] | 600×600 | 24.5%† | - |
| NASNet-A (4 @ 1056) | 600×600 | 29.6% | - |
| ResNet-101-FPN [36] | 800 (short side) | - | 36.2% |
| Inception-ResNet-v2 (G-RMI) [28] | 600×600 | 35.7% | 35.6% |
| Inception-ResNet-v2 (TDM) [52] | 600×1000 | 37.3% | 36.8% |
| NASNet-A (6 @ 4032) | 800×800 | 41.3% | 40.7% |
| NASNet-A (6 @ 4032) | 1200×1200 | 43.2% | 43.1% |
| ResNet-101-FPN (RetinaNet) [37] | 800 (short side) | - | 39.1% |
1707.06875 | 37 | our three systems. Again, we observe different behaviour between WOMs and SIM scores. In Example 1, LOLS generates a grammatically correct English sentence, which represents the meaning of the MR well, and, as a result, this utterance received high human ratings (median = 6) for informativeness, naturalness and quality. However, WOMs rate this utterance low, i.e. scores of BLEU1-4, NIST, LEPOR, CIDEr, ROUGE and METEOR normalised into the 1-6 range all stay below 1.5. This is because the system-generated utterance has low overlap with the human/corpus references. Note that the SIM score is high (5), as it ignores human references and computes distributional semantic similarity between the MR and the system output. Examples 2 and 3 show outputs which receive low scores from both automatic metrics and humans. WOMs score these system outputs low due to little or no overlap with human references, whereas humans are sensitive to ungrammatical output and missing information (the former is partially captured by GBMs). Examples 2 and 3 also illustrate inconsistencies in human ratings since system
1707.07012 | 37 | # mAP (mini-val) mAP (test-dev)
Table 4. Object detection performance on COCO on mini-val and test-dev datasets across a variety of image featurizations. All results are with the Faster-RCNN object detection framework [47] from a single crop of an image. Top rows highlight mobile-optimized image featurizations, while bottom rows indicate computationally heavy image featurizations geared towards achieving best results. All mini-val results employ the same 8K subset of validation images in [28].
We perform single model evaluation using 300-500 RPN proposals per image. In other words, we only pass a single image through a single network. We evaluate the model on the COCO mini-val [28] and test-dev dataset and report the mean average precision (mAP) as computed with the standard COCO metric library [38]. We perform a simple search over learning rate schedules to identify the best possible model. Finally, we examine the behavior of two object
detection systems employing the best performing NASNet-A image featurization (NASNet-A, 6 @ 4032) as well as the image featurization geared towards mobile platforms (NASNet-A, 4 @ 1056).
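The standard COCO metric library mentioned above is typically driven as in the following hedged sketch; the annotation and results file names are placeholders, and the detections must already be exported in the COCO results format.

```python
# Sketch: COCO-style mAP with pycocotools; file names are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_minival.json")        # ground-truth annotations
coco_dt = coco_gt.loadRes("detections.json")    # detections in COCO results format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP@[.50:.95] (the mAP quoted in Table 4), AP50, etc.
```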
1707.06875 | 38 | output and missing information (the former is partially captured by GBMs). Examples 2 and 3 also illustrate inconsistencies in human ratings since system output 2 is clearly worse than output 3 and both are rated by humans with a median score of 1. Example 4 shows an output of the RNNLG system which is semantically very similar to the reference (SIM=4) and rated high by humans, but WOMs fail to capture this similarity. GBMs show more accurate results for this utterance, with a mean readability score of 4 and a parsing score of 3.5.
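Readability scores of this kind are reference-less: they are computed from the output string alone. A minimal sketch of one such score (Flesch reading ease with a crude vowel-group syllable heuristic; the example sentence is made up) is:

```python
# Sketch: a reference-less readability score for a generated utterance
# (Flesch reading ease, with a crude vowel-group syllable heuristic).
import re

def count_syllables(word):
    # Rough heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

output = "The Donatello is a moderately priced hotel in the riverside area."
print(f"Flesch reading ease = {flesch_reading_ease(output):.1f}")
```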
1707.07012 | 38 | For the mobile-optimized network, our resulting system achieves a mAP of 29.6%, exceeding previous mobile-optimized networks that employ Faster-RCNN by over 5.0% (Table 4). For the best NASNet network, our resulting
network operating on images of the same spatial resolution (800 × 800) achieves mAP = 40.7%, exceeding equivalent object detection systems based off lesser performing image featurization (i.e. Inception-ResNet-v2) by 4.0% [28, 52] (see Appendix for example detections on images and side-by-side comparisons). Finally, increasing the spatial resolution of the input image results in the best reported, single model result for object detection of 43.1%, surpassing the previous best by over 4.0% [37]. These results provide further evidence that NASNet provides superior, generic image features that may be transferred across other computer vision tasks. Figure 10 and Figure 11 in Appendix C show four examples of object detection results produced by NASNet-A with the Faster-RCNN framework.
# 4.4. Efficiency of architecture search methods
1707.07012 | 39 | # 4.4. Efficiency of architecture search methods
[Figure 6 plot: validation accuracy at 20 epochs of the top-1, top-5 and top-25 unique models found by RL and by random search (RS), as a function of the number of models sampled.]
Figure 6. Comparing the efficiency of random search (RS) to reinforcement learning (RL) for learning neural architectures. The x-axis measures the total number of model architectures sampled, and the y-axis is the validation performance on CIFAR-10 after 20 epochs of training.
Though what search method to use is not the focus of the paper, an open question is how effective the reinforcement learning search method is. In this section, we study the effectiveness of reinforcement learning for architecture search on the CIFAR-10 image classification problem and compare it to brute-force random search (considered to be a very strong baseline for black-box optimization [5]) given an equivalent amount of computational resources.
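A random-search baseline in this setting simply replaces the controller's softmax samples with uniform draws over the same five decisions per block. The sketch below spells this out for a toy version of the search space; the operation list is abbreviated and the exact choices are illustrative, not the paper's full specification.

```python
# Sketch: draw a convolutional cell uniformly at random from a toy version of
# the NASNet decision space (five discrete choices per block, B = 5 blocks).
# The operation list is abbreviated, not the paper's full set.
import random

OPS = ["identity", "3x3 separable conv", "5x5 separable conv",
       "3x3 average pool", "3x3 max pool", "7x7 separable conv"]
COMBINE = ["add", "concat"]

def sample_random_cell(num_blocks=5):
    cell = []
    for b in range(num_blocks):
        candidate_inputs = list(range(2 + b))  # two cell inputs plus earlier block outputs
        cell.append({
            "input_1": random.choice(candidate_inputs),
            "input_2": random.choice(candidate_inputs),
            "op_1": random.choice(OPS),
            "op_2": random.choice(OPS),
            "combine": random.choice(COMBINE),
        })
    return cell

random.seed(0)
for block in sample_random_cell():
    print(block)
```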
1707.06875 | 40 | rics. These studies mainly considered WBMs, while we are the first study to consider GBMs. Some studies ask users to provide separate ratings for surface realisation (e.g. asking about 'clarity' or 'fluency'), whereas other studies focus only on sentence planning (e.g. 'accuracy', 'adequacy', or 'correctness'). In general, correlations reported by previous work range from weak to strong. The results confirm that metrics can be reliable indicators at system-level (Reiter and Belz, 2009), while they perform less reliably at sentence-level (Stent et al., 2005). Also, the results show that the metrics capture realization better than sentence planning. There is a general trend showing that best-performing metrics tend to be the more complex ones, combining word-overlap, semantic similarity and term frequency weighting. Note, however, that the majority of previous works do not report whether any of the metric correlations are significantly different from each other.
# 10 Conclusions
1707.07012 | 40 | Figure 6 shows the performance of reinforcement learning (RL) and random search (RS) as more model architectures are sampled. Note that the best model identified with RL is significantly better than the best model found by RS by over 1% as measured on CIFAR-10. Additionally, RL finds an entire range of models that are of superior quality to random search. We observe this in the mean performance of the top-5 and top-25 models identified in RL versus RS. We take these results to indicate that although RS may provide a viable search strategy, RL finds better architectures in the NASNet search space.
(Footnote 2: A primary advance in the best reported object detection system is the introduction of a novel loss [37]. Pairing this loss with NASNet-A image featurization may lead to even further performance gains. Additionally, performance gains are achievable through ensembling multiple inferences across multiple model instances and image crops (e.g., [28]).)
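The top-k comparison referred to above is just the mean accuracy of the k best architectures each method has found, e.g.:

```python
# Sketch: mean validation accuracy of the k best architectures found by a method.
def mean_top_k(accuracies, k):
    return sum(sorted(accuracies, reverse=True)[:k]) / k

rl_accs = [0.928, 0.926, 0.925, 0.921, 0.919, 0.915]  # illustrative values only
rs_accs = [0.921, 0.918, 0.917, 0.914, 0.912, 0.910]
print(mean_top_k(rl_accs, 5), mean_top_k(rs_accs, 5))
```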
# 5. Conclusion
1707.06875 | 41 | # 10 Conclusions
This paper shows that state-of-the-art automatic evaluation metrics for NLG systems do not sufficiently reflect human ratings, which stresses the need for human evaluations. This result is opposed to the current trend of relying on automatic evaluation identified in (Gkatzia and Mahamood, 2015). A detailed error analysis suggests that automatic metrics are particularly weak in distinguishing outputs of medium and good quality, which can be partially attributed to the fact that human judgements and metrics are given on different scales. We also show that metric performance is data- and system-specific.
Nevertheless, our results also suggest that automatic metrics can be useful for error analysis by helping to find cases where the system is performing poorly. In addition, we find reliable results on
system-level, which suggests that metrics can be useful for system development.
# 11 Future Directions
1707.07012 | 41 | # 5. Conclusion
In this work, we demonstrate how to learn scalable, convolutional cells from data that transfer to multiple image classification tasks. The learned architecture is quite flexible as it may be scaled in terms of computational cost and parameters to easily address a variety of problems. In all cases, the accuracy of the resulting model exceeds all human-designed models, ranging from models designed for mobile applications to computationally-heavy models designed to achieve the most accurate results.
The key insight in our approach is to design a search space that decouples the complexity of an architecture from the depth of a network. This resulting search space permits identifying good architectures on a small dataset (i.e., CIFAR-10) and transferring the learned architecture to image classifications across a range of data and computational scales.
1707.06875 | 42 | system-level, which suggests that metrics can be useful for system development.
# 11 Future Directions
Word-based metrics make two strong assumptions: They treat human-generated references as a gold standard, which is correct and complete. We argue that these assumptions are invalid for corpus-based NLG, especially when using crowd-sourced datasets. Grammar-based metrics, on the other hand, do not rely on human-generated references and are not influenced by their quality. However, these metrics can be easily manipulated with grammatically correct and easily readable output that is unrelated to the input. We have experimented with combining WBMs and GBMs using ensemble-based learning. However, while our model achieved high correlation with humans within a single domain, its cross-domain performance is insufficient.
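The ensemble experiment mentioned here amounts to regressing human ratings on a feature vector of metric scores. A hedged sketch with scikit-learn (regressor choice arbitrary, feature values and targets made up) could look like:

```python
# Sketch: regress human ratings on a feature vector of metric scores.
# Feature values and targets are made up; the regressor choice is arbitrary.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Columns: BLEU, METEOR, ROUGE-L, semantic similarity, readability, parse score
X = np.array([
    [0.12, 0.30, 0.25, 0.80, 65.0, 0.90],
    [0.45, 0.51, 0.48, 0.85, 70.2, 0.95],
    [0.05, 0.15, 0.10, 0.40, 55.1, 0.60],
    [0.33, 0.40, 0.37, 0.75, 68.4, 0.92],
] * 10)                                    # toy rows repeated to have enough data
y = np.array([3.0, 5.5, 1.5, 4.0] * 10)    # median human ratings on the 1-6 scale

model = RandomForestRegressor(n_estimators=100, random_state=0)
mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {mae:.2f}")
```

Cross-domain evaluation would train such a model on one corpus and test it on another, which is where the text above reports it falling short.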
1707.07012 | 42 | The resulting architectures approach or exceed state-of-the-art performance in both CIFAR-10 and ImageNet datasets with less computational demand than human-designed architectures [60, 29, 69]. The ImageNet results are particularly important because many state-of-the-art computer vision problems (e.g., object detection [28], face detection [50], image localization [63]) derive image features or architectures from ImageNet classification models. For instance, we find that image features obtained from ImageNet used in combination with the Faster-RCNN framework achieve state-of-the-art object detection results. Finally, we demonstrate that we can use the resulting learned architecture to perform ImageNet classification with reduced computational budgets that outperform streamlined architectures targeted to mobile and embedded platforms [24, 70].
# References
[1] M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981-3989, 2016.
1707.06875 | 43 | Our paper clearly demonstrates the need for more advanced metrics, as used in related fields, including: assessing output quality within the dialogue context, e.g. (Dušek and Jurčíček, 2016); extrinsic evaluation metrics, such as NLG's contribution to task success, e.g. (Rieser et al., 2014; Gkatzia et al., 2016; Hastie et al., 2016); building discriminative models, e.g. (Hodosh and Hockenmaier, 2016), (Kannan and Vinyals, 2017); or reference-less quality prediction as used in MT, e.g. (Specia et al., 2010). We see our paper as a first step towards reference-less evaluation for NLG by introducing grammar-based metrics. In current work (Dušek et al., 2017), we investigate a reference-less quality estimation approach based on recurrent neural networks, which predicts a quality score for an NLG system output by comparing it to the source meaning representation only.
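To make the idea of reference-less checks concrete, here is a minimal illustrative sketch (not the metrics evaluated in this paper): it scores a system output without any human reference, using a crude Flesch Reading Ease estimate as a grammar-based signal and a naive check of how many slot values from the source MR are realised verbatim. The E2E-style MR dictionary, the syllable approximation, and the exact-match slot test are all simplifying assumptions.

```python
import re

def flesch_reading_ease(text):
    """Crude Flesch Reading Ease estimate (higher = easier to read).
    Syllables are approximated by counting vowel groups per word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not words or not sentences:
        return 0.0
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def slot_coverage(mr, output):
    """Fraction of MR slot values that appear verbatim in the output:
    a naive reference-less adequacy check."""
    values = list(mr.values())
    hits = sum(1 for v in values if v.lower() in output.lower())
    return hits / len(values) if values else 1.0

if __name__ == "__main__":
    mr = {"name": "The Punter", "food": "Italian", "area": "riverside"}
    output = "The Punter serves Italian food in the riverside area."
    print(round(flesch_reading_ease(output), 1), slot_coverage(mr, output))
```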
1707.07012 | 43 | [2] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[3] B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. In International Conference on Learning Representations, 2016.
[4] J. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Neural Information Processing Systems, 2011.
[5] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 2012.
[6] J. Bergstra, D. Yamins, and D. D. Cox. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. International Conference on Machine Learning, 2013.
[7] J. Chen, R. Monga, S. Bengio, and R. Jozefowicz. Revisiting distributed synchronous SGD. In International Conference on Learning Representations Workshop Track, 2016.
1707.06875 | 44 | Finally, note that the datasets considered in this study are fairly small (between 404 and 2.3k human references per domain). To remedy this, systems train on de-lexicalised versions (Wen et al., 2015), which bears the danger of ungrammatical lexicalisation (Sharma et al., 2016) and a possible overlap between testing and training set (Lampouras and Vlachos, 2016). There are ongoing efforts to release larger and more diverse data sets, e.g. (Novikova et al., 2016, 2017).
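As a rough, hypothetical illustration of the de-lexicalisation step mentioned above (slot names, the placeholder format, and the naive string substitution are assumptions for this sketch, not the pipeline of Wen et al. (2015)): slot values are replaced by placeholders before training and re-inserted at generation time, which is one place where ungrammatical lexicalisations can creep in.

```python
def delexicalise(text, mr):
    """Replace slot values from the meaning representation with placeholders."""
    for slot, value in mr.items():
        text = text.replace(value, f"X-{slot}")
    return text

def lexicalise(template, mr):
    """Re-insert slot values into a delexicalised template. Blind substitution
    ignores the surface form of the value, so output can end up ungrammatical."""
    for slot, value in mr.items():
        template = template.replace(f"X-{slot}", value)
    return template

if __name__ == "__main__":
    mr = {"name": "Loch Fyne", "food": "Italian"}
    reference = "Loch Fyne serves Italian food in the city centre."
    template = delexicalise(reference, mr)  # "X-name serves X-food food in the city centre."
    print(template)
    print(lexicalise(template, {"name": "The Punter", "food": "English"}))
```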
# Acknowledgements
This research received funding from the EPSRC projects DILiGENt (EP/M005429/1) and MaDrIgAL (EP/N017536/1). The Titan Xp used for this research was donated by the NVIDIA Corporation.
# References
Anja Belz and Eric Kow. 2011. Discrete vs. continuous rating scales for language evaluation in NLP. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers – Volume 2. Association for Computational Linguistics, Portland, OR, USA, pages 230–235. http://aclweb.org/anthology/P11-2040.
1707.07012 | 44 | [8] Y. Chen, J. Li, H. Xiao, X. Jin, S. Yan, and J. Feng. Dual path networks. arXiv preprint arXiv:1707.01083, 2017.
[9] F. Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[10] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). In International Conference on Learning Representations, 2016.
[11] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009.
[12] T. DeVries and G. W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
1707.06875 | 45 | Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics. Trento, Italy, pages 313–320. http://aclweb.org/anthology/E06-1040.
Correlating human and automatic evaluation of a German surface realiser. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers. Association for Computational Linguistics, Suntec, Singapore, pages 97–100. https://aclweb.org/anthology/P09-2025.
Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of BLEU in machine translation research. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics. Trento, pages 249–256. http://aclweb.org/anthology/E06-1032.
1707.07012 | 45 | [12] T. DeVries and G. W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
[13] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, volume 32, pages 647–655, 2014.
[14] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
[15] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, 2017.
[16] D. Floreano, P. Dürr, and C. Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 2008.
1707.06875 | 46 | Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, pages 138–145. http://dl.acm.org/citation.cfm?id=1289189.1289273.
Ondřej Dušek, Jekaterina Novikova, and Verena Rieser. 2017. Referenceless quality estimation for natural language generation. In Proceedings of the 1st Workshop on Learning to Generate Natural Language.
Training a natural language generator from unaligned data. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China, pages 451–461. http://aclweb.org/anthology/P15-1044.
1707.07012 | 46 | [16] D. Floreano, P. Dürr, and C. Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 2008.
[17] K. Fukushima. A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, pages 193–202, 1980.
[18] X. Gastaldi. Shake-shake regularization of 3-branch residual networks. In International Conference on Learning Representations Workshop Track, 2017.
[19] D. Ha, A. Dai, and Q. V. Le. Hypernetworks. In International Conference on Learning Representations, 2017.
[20] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[21] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, 2016.
[22] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
1707.06875 | 47 | Ondřej Dušek and Filip Jurčíček. 2016. A context-aware natural language generator for dialogue systems. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, Los Angeles, CA, USA. arXiv:1608.07076. http://aclweb.org/anthology/W16-3622.
Ondřej Dušek and Filip Jurčíček. 2016. Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany, pages 45–51. arXiv:1606.05491. http://aclweb.org/anthology/P16-2008.
Desmond Elliott and Frank Keller. 2014. Comparing automatic evaluation measures for image description. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Baltimore, MD, USA, pages 452–457. http://aclweb.org/anthology/P14-2074.
1707.07012 | 47 | [22] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[23] S. Hochreiter, A. Younger, and P. Conwell. Learning to learn using gradient descent. Artificial Neural Networks, pages 87–94, 2001.
[24] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[25] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507, 2017.
[26] G. Huang, Z. Liu, and K. Q. Weinberger. Densely connected convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[27] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Weinberger. Deep networks with stochastic depth. In European Conference on Computer Vision, 2016.
1707.06875 | 48 | Dominic Espinosa, Rajakrishnan Rajkumar, Michael White, and Shoshana Berleant. 2010. Further meta-evaluation of broad-coverage surface realization. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 564–574. http://aclweb.org/anthology/D10-1055.
Rudolf Franz Flesch. 1979. How to write plain English: A book for lawyers and consumers. HarperCollins.
Thomas Francois and Delphine Bernhard, editors. 2014. Recent Advances in Automatic Readability Assessment and Text Simplification, volume 165:2 of International Journal of Applied Linguistics. John Benjamins. http://doi.org/10.1075/itl.165.2.
1707.07012 | 48 | [28] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al. Speed/accuracy trade-offs for modern convolutional object detectors. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[29] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Learning Representations, 2015.
[30] R. Jozefowicz, W. Zaremba, and I. Sutskever. An empirical exploration of recurrent network architectures. In International Conference on Learning Representations, 2015.
[31] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[32] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
1707.06875 | 49 | Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 445–450. http://aclweb.org/anthology/P15-2073.
Jesús Giménez and Lluís Màrquez. 2008. A smorgasbord of features for automatic MT evaluation. In Proceedings of the Third Workshop on Statistical Machine Translation. Association for Computational Linguistics, Columbus, OH, USA, pages 195–198. http://aclweb.org/anthology/W08-0332.
Dimitra Gkatzia, Oliver Lemon, and Verena Rieser. 2016. Natural language generation enhances human decision-making with uncertain information. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Berlin, Germany, pages 264–268. arXiv:1606.03254. http://aclweb.org/anthology/P16-2043.
1707.07012 | 49 | [33] G. Larsson, M. Maire, and G. Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
[34] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
[35] K. Li and J. Malik. Learning to optimize neural nets. arXiv preprint arXiv:1703.00441, 2017.
[36] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[37] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. arXiv preprint arXiv:1708.02002, 2017.
1707.06875 | 50 | Dimitra Gkatzia, Oliver Lemon, and Verena Rieser. 2016. Natural language generation enhances human decision-making with uncertain information. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Berlin, Germany, pages 264–268. arXiv:1606.03254. http://aclweb.org/anthology/P16-2043.
Dimitra Gkatzia and Saad Mahamood. 2015. A snapshot of NLG evaluation practices 2005–2014. In Proceedings of the 15th European Workshop on Natural Language Generation (ENLG). Association for Computational Linguistics, Brighton, UK, pages 57–60. https://doi.org/10.18653/v1/W15-4708.
Aaron L. F. Han, Derek F. Wong, and Lidia S. Chao. 2012. LEPOR: A robust evaluation metric for machine translation with augmented factors. In Proceedings of COLING 2012: Posters. The COLING 2012 Organizing Committee, Mumbai, India, pages 441–450. http://aclweb.org/anthology/C12-2044.
1707.07012 | 50 | [38] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
[39] I. Loshchilov and F. Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017.
[40] H. Mendoza, A. Klein, M. Feurer, J. T. Springenberg, and F. Hutter. Towards automatically-tuned neural networks. In Proceedings of the 2016 Workshop on Automatic Machine Learning, pages 58–65, 2016.
[41] T. Miconi. Neural networks with differentiable structure. arXiv preprint arXiv:1606.06216, 2016.
[42] R. Miikkulainen, J. Liang, E. Meyerson, A. Rawal, D. Fink, O. Francon, B. Raju, A. Navruzyan, N. Duffy, and B. Hodjat. Evolving deep neural networks. arXiv preprint arXiv:1703.00548, 2017.
1707.06875 | 51 | Tim Finin, James Mayfield, and Jonathan Weese. 2013. UMBC EBIQUITY-CORE: Semantic textual similarity systems. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM). Atlanta, Georgia, volume 1, pages 44–52. http://aclweb.org/anthology/S13-1005.
Helen Hastie, Heriberto Cuayahuitl, Nina Dethlefs, Simon Keizer, and Xingkun Liu. 2016. Why bother? Is evaluation of NLG in an end-to-end Spoken Dialogue System worth it? In Proceedings of the International Workshop on Spoken Dialogue Systems (IWSDS). Saariselkä, Finland.
Micah Hodosh and Julia Hockenmaier. 2016. Focused evaluation for image description with binary forced-choice tasks. In Proceedings of the 5th Workshop on Vision and Language. Berlin, Germany, pages 19–28. http://aclweb.org/anthology/W16-3203.
1707.07012 | 51 | [43] R. Negrinho and G. Gordon. DeepArchitect: Automatically designing and training deep architectures. arXiv preprint arXiv:1704.08792, 2017.
[44] N. Pinto, D. Doukhan, J. J. DiCarlo, and D. D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11):e1000579, 2009.
[45] S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations, 2017.
[46] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, Q. Le, and A. Kurakin. Large-scale evolution of image classifiers. In International Conference on Machine Learning, 2017.
[47] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
1707.06875 | 52 | Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard H. Hovy. 2013. Learning whom to trust with MACE. In Proceedings of NAACL-HLT. Atlanta, GA, USA, pages 1120–1130. http://aclweb.org/anthology/N13-1132.
Min-Yen Kan, Kathleen R. McKeown, and Judith L. Klavans. 2001. Applying natural language generation to indicative summarization. In Proceedings of the 8th European Workshop on Natural Language Generation. Association for Computational Linguistics, Toulouse, France, pages 1–9. https://doi.org/10.3115/1117840.1117853.
Anjuli Kannan and Oriol Vinyals. 2017. Adversarial evaluation of dialogue models. CoRR abs/1701.08198. https://arxiv.org/abs/1701.08198.
1707.07012 | 52 | [48] S. Saxena and J. Verbeek. Convolutional neural fabrics. In Advances in Neural Information Processing Systems, 2016.
[49] T. Schaul and J. Schmidhuber. Metalearning. Scholarpedia, 2010.
[50] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815–823, 2015.
[51] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[52] A. Shrivastava, R. Sukthankar, J. Malik, and A. Gupta. Beyond skip connections: Top-down modulation for object detection. arXiv preprint arXiv:1612.06851, 2016.
[53] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
1707.06875 | 53 | Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, and Erkut Erdem. 2017. Re-evaluating automatic metrics for image captioning. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Valencia, Spain. arXiv:1612.07600. https://arxiv.org/abs/1612.07600.
Gerasimos Lampouras and Andreas Vlachos. 2016. Imitation learning for language generation from unaligned data. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 1101–1112. http://aclweb.org/anthology/C16-1105.
J Richard Landis and Gary G Koch. 1977. The measurement of observer agreement for categorical data. Biometrics. https://doi.org/10.2307/2529310.
1707.07012 | 53 | [53] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
[54] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Neural Information Processing Systems, 2012.
[55] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. Patwary, M. Ali, R. P. Adams, et al. Scalable Bayesian optimization using deep neural networks. In International Conference on Machine Learning, 2015.
[56] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 2009. | 1707.07012#53 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 54 | J Richard Landis and Gary G Koch. 1977. The measurement of observer agreement for categorical data. Biometrics 33(1):159–174. https://doi.org/10.2307/2529310.
Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation. Association for Computational Linguistics, Prague, Czech Republic, pages 228–231. http://aclweb.org/anthology/W07-0734.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop. Barcelona, Spain, pages 74–81. http://aclweb.org/anthology/W04-1013. | 1707.06875#54 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 54 | A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 2009.
[58] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, Inception-Resnet and the impact of residual connections on learning. In International Conference on Learning Representations Workshop Track, 2016.
[59] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[60] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[61] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016. | 1707.07012#54 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 55 | Chia-Wei Liu, Ryan Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, TX, USA, pages 2122–2132. arXiv:1603.08023. http://aclweb.org/anthology/D16-1230.
François Mairesse, Milica Gašić, Filip Jurčíček, Simon Keizer, Blaise Thomson, Kai Yu, and Steve Young. 2010. Phrase-based statistical language generation using graphical models and active learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Uppsala, Sweden, pages 1552–1561. http://aclweb.org/anthology/P10-1157. | 1707.06875#55 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 55 | [62] J. X. Wang, Z. Kurth-Nelson, D. Tirumala, H. Soyer, J. Z. Leibo, R. Munos, C. Blundell, D. Kumaran, and M. Botvinick. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
[63] T. Weyand, I. Kostrikov, and J. Philbin. Planet-photo geolocation with convolutional neural networks. In European Conference on Computer Vision, 2016.
[64] O. Wichrowska, N. Maheswaranathan, M. W. Hoffman, S. G. Colmenarejo, M. Denil, N. de Freitas, and J. Sohl-Dickstein. Learned optimizers that scale and generalize. arXiv preprint arXiv:1703.04813, 2017.
[65] D. Wierstra, F. J. Gomez, and J. Schmidhuber. Modeling systems with internal state using evolino. In The Genetic and Evolutionary Computation Conference, 2005.
[66] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Machine Learning, 1992. | 1707.07012#55 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 56 | Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? Selective generation using LSTMs with coarse-to-fine alignment. In Proceedings of NAACL-HLT 2016. San Diego, CA, USA. arXiv:1509.00838. http://aclweb.org/anthology/N16-1086.
Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2016. There's no comparison: Reference-less evaluation metrics in grammatical error correction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, TX, USA, pages 2109–2115. arXiv:1610.02124. http://aclweb.org/anthology/D16-1228.
Jekaterina Novikova, Ondřej Dušek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-to-end generation. In Proceedings of the 18th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Saarbrücken, Germany. ArXiv:1706.09254. https://arxiv.org/abs/1706.09254. | 1707.06875#56 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 56 | [66] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Machine Learning, 1992.
[67] L. Xie and A. Yuille. Genetic CNN. arXiv preprint arXiv:1703.01513, 2017.
[68] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[69] X. Zhang, Z. Li, C. C. Loy, and D. Lin. Polynet: A pursuit of structural diversity in very deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[70] X. Zhang, X. Zhou, L. Mengxiao, and J. Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.
[71] B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017.
# Appendix
# A. Experimental Details
# A.1. Dataset for Architecture Search | 1707.07012#56 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 57 | Jekaterina Novikova, Oliver Lemon, and Verena Rieser. 2016. Crowd-sourcing NLG data: Pictures elicit better data. In Proceedings of the 9th International Natural Language Generation Conference. Edinburgh, UK, pages 265–273. arXiv:1608.00339. http://aclweb.org/anthology/W16-2302.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Philadelphia, PA, USA, pages 311–318. http://aclweb.org/anthology/P02-1040.
Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics 35(4):529–558. https://doi.org/10.1162/coli.2009.35.4.35405. | 1707.06875#57 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 57 | # Appendix
# A. Experimental Details
# A.1. Dataset for Architecture Search
The CIFAR-10 dataset [31] consists of 60,000 32x32 RGB images across 10 classes (50,000 train and 10,000 test images). We partition a random subset of 5,000 images from the training set to use as a validation set for the controller RNN. All images are whitened and then undergo several data augmentation steps: we randomly crop 32x32 patches from upsampled images of size 40x40 and apply random horizontal flips. This data augmentation procedure is common among related work.
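As a concrete illustration of the augmentation pipeline just described, a minimal sketch follows; the use of NumPy, the zero-padding choice for realizing the 40x40 upsampling, and the function name are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def augment_cifar_image(image, rng):
    # Pad the whitened 32x32xC image to 40x40 (one way to realize the
    # "upsampled images of size 40x40" described above; an assumption here).
    padded = np.zeros((40, 40, image.shape[2]), dtype=image.dtype)
    padded[4:36, 4:36, :] = image
    # Take a random 32x32 crop.
    top = rng.integers(0, 40 - 32 + 1)
    left = rng.integers(0, 40 - 32 + 1)
    crop = padded[top:top + 32, left:left + 32, :]
    # Apply a random horizontal flip with probability 0.5.
    if rng.random() < 0.5:
        crop = crop[:, ::-1, :]
    return crop

# Example usage with a dummy image:
rng = np.random.default_rng(0)
augmented = augment_cifar_image(np.zeros((32, 32, 3), dtype=np.float32), rng)
```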
# A.2. Controller architecture
The controller RNN is a one-layer LSTM [22] with 100 hidden units at each layer and 2 × 5B softmax predictions for the two convolutional cells (where B is typically 5) associated with each architecture decision. Each of the 10B predictions of the controller RNN is associated with a probability. The joint probability of a child network is the product of all probabilities at these 10B softmaxes. This joint probability is used to compute the gradient for the controller RNN. The gradient is scaled by the validation accuracy of the child network to update the controller RNN such that the controller assigns low probabilities for bad child networks and high probabilities for good child networks. | 1707.07012#57 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 58 | Verena Rieser, Oliver Lemon, and Simon Keizer. 2014. Natural language generation as incremental planning under uncertainty: Adaptive information presentation for statistical dialogue systems. IEEE/ACM Transactions on Audio, Speech, and Language Processing 22(5):979–993. https://doi.org/10.1109/TASL.2014.2315271.
Shikhar Sharma, Jing He, Kaheer Suleman, Hannes Schulz, and Philip Bachman. 2016. Natural language generation in dialogue using lexicalized and delexicalized data. CoRR abs/1606.03632. http://arxiv.org/abs/1606.03632.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation of the Americas. Cambridge, MA, USA, pages 223–231. http://mt-archive.info/AMTA-2006-Snover.pdf.
and Marco Turchi. 2010. Machine translation evaluation versus quality estimation. Machine translation 24(1):39–50. https://doi.org/10.1007/s10590-010-9077-2. | 1707.06875#58 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 58 | Unlike [71], who used the REINFORCE rule [66] to update the controller, we employ Proximal Policy Optimization (PPO) [51] with learning rate 0.00035 because training with PPO is faster and more stable. To encourage exploration we also use an entropy penalty with a weight of 0.00001. In our implementation, the baseline function is an exponential moving average of previous rewards with a weight of 0.95. The weights of the controller are initialized uniformly between -0.1 and 0.1.
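A minimal sketch of the reward bookkeeping described above is shown below; it uses a plain REINFORCE-style surrogate with the exponential-moving-average baseline and the entropy bonus rather than the full clipped PPO objective, and all names are illustrative assumptions.

```python
import numpy as np

class RewardBaseline:
    """Exponential moving average of child-network rewards (decay 0.95)."""
    def __init__(self, decay=0.95):
        self.decay = decay
        self.value = None

    def update(self, reward):
        if self.value is None:
            self.value = reward
        else:
            self.value = self.decay * self.value + (1.0 - self.decay) * reward
        return self.value

def controller_loss(log_probs, entropies, reward, baseline, entropy_weight=1e-5):
    """Loss whose gradient raises the probability of above-baseline architectures."""
    advantage = reward - baseline
    policy_loss = -np.sum(log_probs) * advantage   # sum over the controller's decisions
    entropy_bonus = entropy_weight * np.sum(entropies)
    return policy_loss - entropy_bonus
```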
# A.3. Training of the Controller
For distributed training, we use a workqueue system where all the samples generated from the controller RNN are added to a global workqueue. A free "child" worker in a distributed worker pool asks the controller for new work from the global workqueue. Once the training of the child network is complete, the accuracy on a held-out validation set is computed and reported to the controller RNN. In our experiments we use a child worker pool size of 450, which means there are 450 networks being trained on 450 GPUs concurrently at any time. Upon receiving enough child model training results, the controller RNN will perform a gradient update on its weights using PPO and then sample another batch of architectures that go into the global workqueue. This process continues until a predetermined | 1707.07012#58 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 59 | Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Computational Linguistics and Intelligent Text Processing: 6th International Conference, CICLing 2005, Mexico City, Mexico, February 13-19, 2005. Proceedings. Springer, Berlin/Heidelberg, pages 341–351. https://doi.org/10.1007/978-3-540-30586-6_38.
Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, MA, USA, pages 4566–4575. https://doi.org/10.1109/CVPR.2015.7299087. | 1707.06875#59 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 59 | number of architectures have been sampled. In our experiments, this predetermined number of architectures is 20,000 which means the search process is terminated after 20,000 child models have been trained. Additionally, we update the controller RNN with minibatches of 20 architectures. Once the search is over, the top 250 architectures are then chosen to train until convergence on CIFAR-10 to determine the very best architecture.
# A.4. Details of architecture search space
We performed preliminary experiments to identify a flexible, expressive search space for neural architectures that learn effectively. Generally, our strategy for preliminary experiments involved small-scale explorations to identify how to run large-scale architecture search.
⢠All convolutions employ ReLU nonlinearity. Exper- iments with ELU nonlinearity [10] showed minimal beneï¬t.
⢠To ensure that the shapes always match in convolu- tional cells, 1x1 convolutions are inserted as necessary.
⢠Unlike [24], all depthwise separable convolution do not employ Batch Normalization and/or a ReLU be- tween the depthwise and pointwise operations.
⢠All convolutions followed an ordering of ReLU, con- volution operation and Batch Normalization following [21]. | 1707.07012#59 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 60 | Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Lina Maria Rojas-Barahona, Pei-hao Su, David Vandyke, and Steve J. Young. 2016. Multi-domain neural network language generation for spoken dialogue systems. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, CA, USA, pages 120–129. arXiv:1603.01232. http://aclweb.org/anthology/N16-1015.
Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal, pages 1711–1721. http://aclweb.org/anthology/D15-1199.
Evan James Williams. 1959. Regression analysis. John Wiley & Sons, New York, NY, USA.
# Appendix A: Detailed Results | 1707.06875#60 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 61 | Evan James Williams. 1959. Regression analysis. John Wiley & Sons, New York, NY, USA.
# Appendix A: Detailed Results
BAGEL:   Inf: 0.16*  Nat: 0.36*  Qua: 0.38*  TGEN: 0.42*   LOLS: 0.24*  Total BAGEL: 0.31*
SFHOTEL: Inf: 0.41*  Nat: 0.47*  Qua: 0.52*  RNNLG: 0.52*  LOLS: 0.45*  Total SFHOTEL: 0.50*
SFREST:  Inf: 0.35*  Nat: 0.29*  Qua: 0.35*  RNNLG: 0.28*  LOLS: 0.38*  Total SFREST: 0.35*
Total all data: 0.45*
Table 7: Intra-class correlation coefficient (ICC) for human ratings across the three datasets. '*' denotes statistical significance (p < 0.05). | 1707.06875#61 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 61 | # A.5. Training with ScheduledDropPath
We performed several experiments with various stochastic regularization methods. Naively applying dropout [56] across convolutional filters degraded performance. However, we discovered a new technique called ScheduledDropPath, a modified version of DropPath [33], that works well in regularizing NASNets. In DropPath, we stochastically drop out each path (i.e., edge with a yellow box in Figure 4) in the cell with some fixed probability. This is similar to [27] and [69] where they dropout full parts of their model during training and then at test time scale the path by the probability of keeping that path during training. Interestingly we also found that DropPath alone does not help NASNet training much, but DropPath with linearly increasing the probability of dropping out a path over the course of training significantly improves the final performance for both CIFAR and ImageNet experiments. We name this method ScheduledDropPath.
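A minimal sketch of ScheduledDropPath as described above is given below; the NumPy interface, the final drop probability, and the rescaling convention are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np

def scheduled_drop_path(path_outputs, train_fraction, final_drop_prob=0.5, rng=None):
    """Drop whole cell paths with a probability that grows linearly during training.

    path_outputs:   list of arrays, one per path entering a combination op
    train_fraction: fraction of total training completed, in [0, 1]
    """
    rng = rng or np.random.default_rng()
    drop_prob = final_drop_prob * train_fraction   # linearly increasing schedule
    keep_prob = 1.0 - drop_prob
    kept = []
    for x in path_outputs:
        if rng.random() < drop_prob:
            kept.append(np.zeros_like(x))           # drop this path entirely
        else:
            kept.append(x / keep_prob)              # rescale the surviving path
    return kept
```

A real implementation would also guard against dropping every input of a combination op at once; that detail is omitted here for brevity.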
(Figure 7 diagram: repeated "hidden state set" boxes feeding blocks; see the caption below.) | 1707.07012#60 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 62 | metric TER BLEU1 BLEU2 BLEU3 BLEU4 ROUGE NIST LEPOR CIDEr METEOR SIM RE msp prs len wps sps cpw spw pol ppw informativeness naturalness quality BAGEL TGEN Avg / StDev 0.36/0.24 0.75*/0.21 0.68/0.23 0.60/0.28 0.52/0.32 0.76/0.18 4.44*/2.05 0.46*/0.22 2.92/2.40 0.50/0.22 0.66/0.09 86.79/19.48 0.04*/0.21 84.51*/25.78 38.20*/14.22 10.08*/3.10 13.15*/4.98 3.77/0.60 1.30/0.22 2.22/1.21 0.22/0.09 4.77/1.09 4.76/1.26 4.77/1.19 LOLS Avg / StDev 0.33/0.24 0.81*/0.16 0.72/0.21 0.63/0.26 0.53/0.33 0.78/0.17 4.91*/2.04 0.50*/0.19 3.01/2.27 | 1707.06875#62 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 61 | (Figure 7 diagram: repeated "hidden state set" boxes feeding blocks; see the caption below.)
Figure 7. Schematic diagram of the NASNet search space. Network motifs are constructed recursively in stages termed blocks. Each block consists of the controller selecting a pair of hidden states (dark gray), operations to perform on those hidden states (yellow) and a combination operation (green). The resulting hidden state is retained in the set of potential hidden states to be selected on subsequent blocks.
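To make the recursion in the caption concrete, the sketch below assembles one cell description by repeatedly picking two hidden states, an operation for each, and a combination op; plain random sampling stands in for the controller's softmax decisions, and the operation names are illustrative assumptions.

```python
import random

OPS = ["identity", "3x3 separable conv", "5x5 separable conv", "3x3 avg pool", "3x3 max pool"]
COMBINE = ["add", "concat"]

def sample_cell(num_blocks=5):
    """Build one cell: each block consumes two existing hidden states and
    produces a new one that later blocks may select."""
    hidden_states = ["h[i-1]", "h[i]"]   # outputs of the two previous cells
    blocks = []
    for _ in range(num_blocks):
        block = {
            "left": (random.choice(hidden_states), random.choice(OPS)),
            "right": (random.choice(hidden_states), random.choice(OPS)),
            "combine": random.choice(COMBINE),
        }
        blocks.append(block)
        hidden_states.append("block%d" % len(blocks))   # new selectable hidden state
    return blocks

print(sample_cell())
```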
# A.6. Training of CIFAR models
All of our CIFAR models use a single period cosine decay as in [39, 18]. All models use the momentum optimizer with momentum rate set to 0.9. All models also use L2 weight decay. Each architecture is trained for a fixed 20 epochs on CIFAR-10 during the architecture search process. Additionally, we found it beneficial to use the cosine learning rate decay during the 20 epochs the CIFAR models were trained as this helped to further differentiate good architectures. We also found that having the CIFAR models use a small N = 2 during the architecture search process allowed for models to train quite quickly, while still finding cells that work well once more were stacked.
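For reference, a single-period cosine learning-rate schedule of the kind cited above can be written as follows; the peak and floor learning rates are placeholders, not values taken from the text.

```python
import math

def cosine_decay_lr(step, total_steps, lr_max=0.025, lr_min=0.0):
    """Single-period cosine decay from lr_max down to lr_min."""
    progress = min(step, total_steps) / float(total_steps)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * progress))
```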
# A.7. Training of ImageNet models | 1707.07012#62 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 63 | 0.53/0.33 0.78/0.17 4.91*/2.04 0.50*/0.19 3.01/2.27 0.53/0.23 0.65/0.12 83.39/20.41 0.14*/0.37 93.30*/27.04 42.54*/14.11 10.94*/3.19 14.61*/5.13 3.88/0.59 1.33/0.23 2.40/1.16 0.22/0.09 4.91/1.23 4.67/1.25 4.54/1.28 SFHOTEL LOLS Avg / StDev 0.65*/0.32 0.66*/0.23 0.54*/0.28 0.42*/0.33 0.28*/0.33 0.64*/0.21 3.49*/1.99 0.30*/0.16 1.66*/1.67 0.44*/0.20 0.73*/0.14 69.62/19.14 0.69/0.77 107.90*/36.41 51.69*/17.30 12.07*/4.17 17.02*/5.90 4.36/0.63 1.43/0.26 1.33/1.04 0.12/0.09 | 1707.06875#63 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 63 | # A.7. Training of ImageNet models
We use ImageNet 2012 ILSVRC challenge data for large scale image classification. The dataset consists of ~1.2M images labeled across 1,000 classes [11]. Overall our training and testing procedures are almost identical to [60]. ImageNet models are trained and evaluated on 299x299 or 331x331 images using the same data augmentation procedures as described previously [60]. We use distributed synchronous SGD to train the ImageNet model with 50 workers (and 3 backup workers) each with a Tesla K40 GPU [7]. We use RMSProp with a decay of 0.9 and epsilon of 1.0. Evaluations are calculated using a running average of parameters over time with a decay rate of 0.9999. We use label smoothing with a value of 0.1 for all ImageNet models as done in [60]. Additionally, all models use an auxiliary | 1707.07012#62 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
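The training recipe quoted in chunk 1707.07012#63 above combines RMSProp (decay 0.9, epsilon 1.0), label smoothing of 0.1, and a running average of the parameters with decay 0.9999. Below is a minimal NumPy sketch of the last two ingredients, assuming one-hot targets and a dict of parameter arrays; the names and shapes are illustrative, not the authors' code.

```python
# Hedged sketch of label smoothing (epsilon = 0.1) and a parameter
# exponential moving average (decay = 0.9999), as described in the chunk above.
import numpy as np

def smooth_labels(one_hot: np.ndarray, epsilon: float = 0.1) -> np.ndarray:
    """Mix one-hot targets with a uniform distribution over the classes."""
    num_classes = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / num_classes

class ParameterEMA:
    """Shadow copies of the weights; evaluation would use the averaged values."""
    def __init__(self, params: dict, decay: float = 0.9999):
        self.decay = decay
        self.shadow = {name: value.copy() for name, value in params.items()}

    def update(self, params: dict) -> None:
        for name, value in params.items():
            self.shadow[name] = self.decay * self.shadow[name] + (1.0 - self.decay) * value

# Example: a 4-class one-hot target [0, 1, 0, 0] becomes [0.025, 0.925, 0.025, 0.025].
print(smooth_labels(np.eye(4)[1]))
```

Swapping the shadow weights in at evaluation time is what the chunk means by evaluating with "a running average of parameters over time".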
1707.06875 | 64 | 17.02*/5.90 4.36/0.63 1.43/0.26 1.33/1.04 0.12/0.09 5.27/1.02 4.62/1.28 4.53/1.26 RNNLG Avg / StDev 0.28*/0.27 0.85*/0.18 0.78*/0.25 0.69*/0.32 0.56*/0.40 0.83*/0.18 4.37*/2.19 0.52*/0.23 3.08*/2.05 0.62*/0.27 0.76*/0.15 70.90/17.07 0.68/0.78 97.58*/32.58 49.06*/15.77 11.43*/3.63 16.03*/4.88 4.34/0.58 1.43/0.23 1.24/1.04 0.11/0.10 5.47*/0.81 4.99*/1.13 4.54/1.18 SFREST RNNLG Avg / StDev 0.41*/0.35 0.73*/0.24 0.62*/0.31 0.52*/0.37 0.44*/0.41 0.72*/0.24 4.86*/2.55 0.51*/0.25 | 1707.06875#64 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 64 | classifier located at 2/3 of the way up the network. The loss of the auxiliary classifier is weighted by 0.4, as done in [60]. We empirically found our network to be insensitive to the number of parameters associated with this auxiliary classifier along with the weight associated with the loss. All models also use L2 regularization. The learning rate decay scheme is the exponential decay scheme used in [60]. Dropout is applied to the final softmax matrix with probability 0.5.
# B. Additional Experiments | 1707.07012#64 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
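Chunk 1707.07012#64 above weights the auxiliary-classifier loss by 0.4 and uses an exponential learning-rate decay. A hedged sketch of both follows, assuming a caller-supplied cross-entropy function; the decay constants are placeholders, since the chunk does not give the exact rate or step size.

```python
# Hedged sketch, not the paper's implementation.
def total_loss(main_logits, aux_logits, targets, cross_entropy, aux_weight=0.4):
    """Main cross-entropy plus the auxiliary classifier's loss weighted by 0.4."""
    return cross_entropy(main_logits, targets) + aux_weight * cross_entropy(aux_logits, targets)

def exponential_decay(step, base_lr=0.045, decay_rate=0.94, decay_steps=2000):
    """Exponentially decayed learning rate; the constants here are illustrative only."""
    return base_lr * decay_rate ** (step / decay_steps)
```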
1707.06875 | 65 | 0.52*/0.37 0.44*/0.41 0.72*/0.24 4.86*/2.55 0.51*/0.25 3.39*/2.53 0.54*/0.28 0.76/0.13 64.67/19.07 0.78/0.82 93.74/34.98 53.27*/19.50 11.15*/4.37 16.39*/6.17 4.86*/0.64 1.50/0.26 1.69/1.12 0.16/0.11 5.29*/0.94 4.86/1.13 4.51/1.14 LOLS Avg / StDev 0.65*/0.27 0.59*/0.23 0.45*/0.29 0.34*/0.33 0.24*/0.32 0.58*/0.22 4.01*/2.07 0.30*/0.17 2.09*/1.73 0.41*/0.19 0.77/0.14 64.27/22.22 0.85/0.89 97.20/39.30 50.92*/18.74 10.52*/4.21 15.41*/5.92 4.94*/0.76 1.50/0.29 1.57/1.07 | 1707.06875#65 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 65 | # B. Additional Experiments
We now present two additional cells that performed well on CIFAR and ImageNet. The search spaces used for these cells are slightly different from the one used for NASNet-A. For the NASNet-B model in Figure 8 we do not concatenate all of the unused hidden states generated in the convolutional cell. Instead, all of the hidden states created within the convolutional cell, even if they are currently used, are fed into the next layer. Note that B = 4 and there are 4 hidden states as input to the cell, as these numbers must match for this cell to be valid. We also allow addition followed by layer normalization [2] or instance normalization [61] to be predicted as two of the combination operations within the cell, along with addition or concatenation.
For NASNet-C (Figure 9), we concatenate all of the unused hidden states generated in the convolutional cell as in NASNet-A, but now we allow the prediction of addition followed by layer normalization or instance normalization as in NASNet-B. | 1707.07012#65 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
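Chunk 1707.07012#65 above extends the combination operations that merge two branches of a block with "addition followed by layer normalization or instance normalization". A minimal NumPy sketch of such a combination step follows; the axis conventions (batch first, channels last) are assumptions, not taken from the paper.

```python
# Hedged sketch of combination operations: add, concatenate, or add + layer norm.
import numpy as np

def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    # Normalize each example over all non-batch dimensions.
    axes = tuple(range(1, x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def combine(a: np.ndarray, b: np.ndarray, op: str) -> np.ndarray:
    if op == "add":
        return a + b
    if op == "concat":
        return np.concatenate([a, b], axis=-1)   # channel axis assumed to be last
    if op == "add_layer_norm":
        return layer_norm(a + b)                 # addition followed by normalization
    raise ValueError(f"unknown combination op: {op}")
```

An instance-normalization variant would normalize over spatial positions only, separately per channel, but the structure of the combination step is the same.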
1707.07012 | 67 | (Figure 8 panel label: Reduction Cell.)
Figure 8. Architecture of the NASNet-B convolutional cell with B = 4 blocks identified with CIFAR-10. The input (white) is the hidden state from previous activations (or the input image). Each convolutional cell is the result of B blocks. A single block corresponds to two primitive operations (yellow) and a combination operation (green). As we do not concatenate the output hidden states, each output hidden state is used as a hidden state in the future layers. Each cell takes in 4 hidden states and thus needs to also create 4 output hidden states. Each output hidden state is therefore labeled 0, 1, 2, 3 to represent the next four layers in that order.
# C. Example object detection results
Finally, we will present examples of object detection results on the COCO dataset in Figure 10 and Figure 11. As can be seen from the figures, NASNet-A featurization works well with Faster-RCNN and gives accurate localization of objects.
(Figure residue: Normal Cell / Reduction Cell panel labels.) | 1707.07012#67 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
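The Figure 8 caption quoted above (chunk 1707.07012#67) describes each block as two primitive operations applied to two chosen hidden states and merged by a single combination operation. A schematic sketch of assembling a cell from such block specifications follows; the tuple layout and the apply_op/combine callbacks are assumptions for illustration, not the paper's data structures.

```python
# Hedged sketch of building one convolutional cell from B block specifications.
def build_cell(hidden_states, blocks, apply_op, combine):
    """blocks: list of (input_idx_1, op_1, input_idx_2, op_2, comb_op) tuples."""
    states = list(hidden_states)
    outputs = []
    for i1, op1, i2, op2, comb in blocks:
        merged = combine(apply_op(op1, states[i1]), apply_op(op2, states[i2]), comb)
        states.append(merged)      # later blocks may pick this output as an input
        outputs.append(merged)
    return outputs   # NASNet-A/C concatenate the unused outputs; NASNet-B passes all of them on
```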
1707.06875 | 68 | [Extraction-garbled (mirror-reversed) fragment of the table of correlations between automatic metrics and human ratings of informativeness, naturalness and quality for individual datasets (BAGEL, SFHOTEL, SFREST) and systems (RNNLG, LOLS); the individual values are not recoverable here.] | 1707.06875#68 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 68 | (Figure residue: Normal Cell / Reduction Cell panel labels.)
Figure 9. Architecture of the NASNet-C convolutional cell with B = 4 blocks identified with CIFAR-10. The input (white) is the hidden state from previous activations (or the input image). The output (pink) is the result of a concatenation operation across all resulting branches. Each convolutional cell is the result of B blocks. A single block corresponds to two primitive operations (yellow) and a combination operation (green). | 1707.07012#68 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 69 | [Extraction-garbled (mirror-reversed) run of correlation values continuing the table of metric correlations with human ratings; the individual values are not recoverable here.] | 1707.06875#69 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.07012 | 69 | Figure 10. Example detections showing improvements of object detection over previous state-of-the-art model for Faster-RCNN with Inception-ResNet-v2 featurization [28] (top) and NASNet-A featurization (bottom).
Figure 11. Example detections of the best-performing NASNet-A featurization with Faster-RCNN trained on the COCO dataset. Top and middle images courtesy of http://wikipedia.org. Bottom image courtesy of Jonathan Huang | 1707.07012#69 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411 | [
{
"id": "1708.02002"
},
{
"id": "1703.00441"
},
{
"id": "1707.06347"
},
{
"id": "1703.01513"
},
{
"id": "1605.07648"
},
{
"id": "1606.06216"
},
{
"id": "1703.04813"
},
{
"id": "1612.06851"
},
{
"id": "1611.02779"
},
{
"id": "1607.08022"
},
{
"id": "1704.08792"
},
{
"id": "1708.04552"
},
{
"id": "1703.00548"
},
{
"id": "1704.04861"
},
{
"id": "1607.06450"
},
{
"id": "1707.01083"
},
{
"id": "1709.01507"
},
{
"id": "1611.05763"
}
] |
1707.06875 | 70 | [Extraction-garbled (mirror-reversed) run of correlation values continuing the table of metric correlations with human ratings; the individual values are not recoverable here.] | 1707.06875#70 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 71 | [Extraction-garbled (mirror-reversed) run of correlation values continuing the table of metric correlations with human ratings; the individual values are not recoverable here.] | 1707.06875#71 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 72 | [Extraction-garbled (mirror-reversed) run of correlation values continuing the table of metric correlations with human ratings; the individual values are not recoverable here.] | 1707.06875#72 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 73 | [Extraction-garbled (mirror-reversed) run of correlation values continuing the table of metric correlations with human ratings; the individual values are not recoverable here.] | 1707.06875#73 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 74 | [Extraction-garbled (mirror-reversed) table fragment; the recoverable caption text reads roughly: "... and human ratings for individual datasets and systems. '*' denotes statistically significant correlation (p < 0.05), ... stronger correlation when comparing two systems on the same dataset." The individual values are not recoverable here.] | 1707.06875#74 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 75 | [Extraction-garbled (mirror-reversed) table fragment: the column headers TGEN / informativeness / naturalness / quality and metric row labels (TER, BLEU-1..4, ROUGE, METEOR, LEPOR, cpw, len, ...) are recoverable; the individual correlation values are not.] | 1707.06875#75 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 76 | [Extraction-garbled (mirror-reversed) fragment of Table 9 ("Spearman correlation between metrics"): word-count and readability row labels (cpw, len, wps, sps, spw, pol, ppw, msp, prs) and part of the caption ("bold font denotes significantly ...") are recoverable; the individual values are not.] | 1707.06875#76 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 77 | inf BAGEL nat qual inf SFHOTEL nat qual inf SFREST nat qual TER BLEU1 BLEU2 BLEU3 BLEU4 ROUGE NIST LEPOR CIDEr METEOR SIM RE cpw len wps sps spw pol ppw msp prs -0.19* 0.23* 0.21* 0.19* 0.18* 0.20* 0.21* 0.07 0.21* 0.25* 0.15* -0.08 0.05 0.14* 0.14* 0.14* 0.05 0.13* 0.06 0.02 -0.10 -0.19* 0.14* 0.15* 0.15* 0.14* 0.13* 0.09 0.07 0.16* 0.13* 0.09 0.03 -0.04 -0.22* -0.23* -0.19* 0.00 -0.05 0.11* -0.04 0.22* -0.15* 0.11* 0.12* 0.11* 0.10* 0.11* 0.06 0.01 0.12* 0.12* 0.07 0.09 -0.12* -0.24* -0.23* -0.21* -0.06 | 1707.06875#77 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 78 | 0.12* 0.12* 0.07 0.09 -0.12* -0.24* -0.23* -0.21* -0.06 -0.10* 0.04 -0.11* 0.25* -0.10* 0.11* 0.10* 0.09* 0.08* 0.09* 0.07* 0.13* 0.10* 0.11* 0.01 0.01 0.07* 0.05 0.03 -0.01 -0.10* -0.04 -0.06 0.02 -0.05 -0.19* 0.18* 0.17* 0.16* 0.10* 0.15* 0.13* 0.15* 0.16* 0.15* -0.04 0.04 0.05 -0.14* -0.14* -0.18* -0.06 -0.10* -0.04 -0.06 0.12* -0.07* 0.07* 0.07* 0.07* 0.06 0.06 0.06 0.05 0.05 0.08* -0.09* 0.10* -0.02 -0.07* -0.06 -0.12* -0.14* -0.14* | 1707.06875#78 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |
1707.06875 | 79 | -0.09* 0.10* -0.02 -0.07* -0.06 -0.12* -0.14* -0.14* -0.13* -0.06 0.07 -0.09* 0.11* 0.09* 0.08* 0.09* 0.09* 0.10* 0.16* 0.08* 0.15* 0.15* 0.02 0.04 0.16* 0.14* 0.10* -0.11* -0.03 -0.11* 0.08* -0.13* -0.15* 0.14* 0.13* 0.12* 0.09* 0.15* 0.08* 0.12* 0.12* 0.18* -0.02 0.02 0.10* -0.15* -0.17* -0.18* -0.02 -0.08* 0.01 0.01 0.18* -0.08* 0.07* 0.06* 0.06* 0.05 0.06* 0.03 0.04 0.04 0.11* -0.02 0.06 0.06 -0.09* -0.10* -0.12* -0.07* -0.08* -0.04 | 1707.06875#79 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 | [
{
"id": "1612.07600"
},
{
"id": "1706.09254"
},
{
"id": "1509.00838"
},
{
"id": "1608.00339"
},
{
"id": "1606.05491"
},
{
"id": "1610.02124"
},
{
"id": "1603.01232"
},
{
"id": "1608.07076"
},
{
"id": "1603.08023"
},
{
"id": "1606.03254"
}
] |