Dataset schema (column: type, minimum to maximum length/value):
doi: stringlengths, 10 to 10
chunk-id: int64, 0 to 936
chunk: stringlengths, 401 to 2.02k
id: stringlengths, 12 to 14
title: stringlengths, 8 to 162
summary: stringlengths, 228 to 1.92k
source: stringlengths, 31 to 31
authors: stringlengths, 7 to 6.97k
categories: stringlengths, 5 to 107
comment: stringlengths, 4 to 398
journal_ref: stringlengths, 8 to 194
primary_category: stringlengths, 5 to 17
published: stringlengths, 8 to 8
updated: stringlengths, 8 to 8
references: list
1605.07427
17
$$P(x) = \left[x,\ \tfrac{1}{2} - \|x\|_2^2,\ \tfrac{1}{2} - \|x\|_2^4,\ \ldots,\ \tfrac{1}{2} - \|x\|_2^{2^m}\right] \qquad (6)$$
$$Q(x) = [x, 0, 0, \ldots, 0] \qquad (7)$$
We thus have the following approximation of MIPS by MCSS for any query vector $q$:
$$\operatorname*{argmax}_i\ q^\top x_i \;\approx\; \operatorname*{argmax}_i\ \frac{Q(q)^\top P(x_i)}{\|Q(q)\|_2 \cdot \|P(x_i)\|_2} \qquad (8)$$
Once we convert MIPS to MCSS, we can use spherical K-means [12] or its hierarchical version to approximate and speed up the cosine similarity search. Once the memory is clustered, every read operation requires only K dot-products, where K is the number of cluster centroids. Since this is an approximation, it is error-prone. As we use this approximation during learning, it introduces some bias in the gradients, which can affect the overall performance of HMN. To alleviate this bias, we propose three simple strategies.
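Below is a minimal NumPy sketch of the transforms in Eqs. (6)-(7) and the check in Eq. (8); function and variable names are ours, and it assumes the memory vectors have been rescaled to norm below 1, as this reduction requires.

```python
import numpy as np

def p_transform(X, m=3):
    # P(x) = [x, 1/2 - ||x||^2, 1/2 - ||x||^4, ..., 1/2 - ||x||^(2^m)]  (Eq. 6)
    # Assumes each row of X has been rescaled to have L2 norm strictly below 1.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    extras = [0.5 - norms ** (2 ** (i + 1)) for i in range(m)]
    return np.hstack([X] + extras)

def q_transform(q, m=3):
    # Q(q) = [q, 0, 0, ..., 0]  (Eq. 7): zero padding keeps inner products intact.
    return np.concatenate([q, np.zeros(m)])

# Toy check: cosine search on the transformed vectors tracks the MIPS winner.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
X = X / (np.linalg.norm(X, axis=1).max() * 1.5)   # enforce ||x||_2 < 1
q = rng.normal(size=32)

P, Qq = p_transform(X), q_transform(q)
cosines = (P @ Qq) / (np.linalg.norm(P, axis=1) * np.linalg.norm(Qq))
print("MIPS argmax:", int(np.argmax(X @ q)), "MCSS argmax:", int(np.argmax(cosines)))
```

The two indices agree in the vast majority of cases; the residual mismatch is the approximation error that the clustering-based search and the authors' bias-reduction strategies are meant to cope with.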
1605.07427#17
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07678
17
As the spoiler in Section 3.1 already gave away, the linear accuracy vs. throughput relationship becomes hyperbolic when the forward inference time is considered instead. Then, since the operation count is linear in the inference time, the accuracy has a hyperbolic dependency on the amount of computation a network requires. 3.8 PARAMETERS UTILISATION
1605.07678#17
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
http://arxiv.org/pdf/1605.07678
Alfredo Canziani, Adam Paszke, Eugenio Culurciello
cs.CV
7 pages, 10 figures, legend for Figure 2 got lost :/
null
cs.CV
20160524
20170414
[ { "id": "1602.07261" }, { "id": "1606.02147" }, { "id": "1512.03385" }, { "id": "1512.00567" }, { "id": "1510.00149" } ]
1605.07683
17
Using the KB, conversations are generated in the format shown in Figure 1. Each example is a dialog comprising utterances from a user and a bot, as well as API calls and the resulting facts. Dialogs are generated after creating a user request by sampling an entry for each of the four required fields: e.g. the request in Figure 1 is [cuisine: British, location: London, party size: six, price range: expensive]. We use natural language patterns to create user and bot utterances. There are 43 patterns for the user and 20 for the bot (the user can use up to 4 ways to say something, while the bot always uses the same one). Those patterns are combined with the KB entities to form thousands of different utterances. 3.1.1 TASK DEFINITIONS We now detail each task. Tasks 1 and 2 test dialog management, to see if end-to-end systems can learn to implicitly track the dialog state (which is never given explicitly), whereas Tasks 3 and 4 check whether they can learn to use KB facts in a dialog setting. Task 3 also requires learning to sort. Task 5 combines all tasks.
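The generation procedure described above amounts to slot sampling plus template filling. A small illustrative sketch (the pattern strings and field names are invented for illustration; the real tasks use 43 user and 20 bot patterns):

```python
import random

USER_PATTERNS = [
    "may i have a table for {party_size} in a {price} price range with {cuisine} food in {location}",
    "i'd like to book a {price} {cuisine} restaurant in {location} for {party_size} people",
]
API_CALL_PATTERN = "api_call {cuisine} {location} {party_size} {price}"

def sample_request(field_values, rng=random):
    # Sample one value per required field, then render a user utterance and
    # the corresponding API call from fixed natural-language patterns.
    slots = {field: rng.choice(values) for field, values in field_values.items()}
    return rng.choice(USER_PATTERNS).format(**slots), API_CALL_PATTERN.format(**slots)

fields = {"cuisine": ["british", "french"], "location": ["london", "paris"],
          "party_size": ["two", "six"], "price": ["cheap", "expensive"]}
print(sample_request(fields))
```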
1605.07683#17
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
18
• Instead of using only the top-K candidates for a single read query, we also add the top-K candidates retrieved for every other read query in the mini-batch. This serves two purposes. First, we can do efficient matrix multiplications by leveraging GPUs, since all the K-softmaxes in a mini-batch are over the same set of elements. Second, this also helps to decrease the bias introduced by the approximation error. • For every read access, instead of only using the top few clusters which have the maximum product with the read query, we also sample some clusters from the rest, based on a probability distribution log-proportional to the dot product with the cluster centroids. This also decreases the bias. • We can also sample random blocks of memory and add them to the top-K candidates. We empirically investigate the effect of these variations in Section 5.5. # 4 Related Work
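The three strategies can be read as different ways of enlarging the candidate set over which the K-softmax is computed. A hedged sketch of how such a candidate set might be assembled (the function name, signature and hyper-parameters are illustrative, not taken from the paper):

```python
import numpy as np

def build_candidate_set(queries, centroids, clusters, top_p=2, sampled_p=2,
                        rand_blocks=1, rng=None):
    """queries: (batch, d) read queries; centroids: (K, d) cluster centroids;
    clusters: list of K index lists into the memory.
    Combines (1) the union of per-query top clusters across the mini-batch,
    (2) extra clusters sampled with probability proportional to exp(dot product),
    i.e. log-probability proportional to the dot product, and (3) random blocks."""
    rng = rng or np.random.default_rng()
    K = len(clusters)
    candidates = set()
    scores = queries @ centroids.T                      # (batch, K) dot products
    for row in scores:
        top = np.argsort(row)[-top_p:]                  # clusters with maximum product
        probs = np.exp(row - row.max())
        probs[top] = 0.0                                # sample only from the rest
        probs /= probs.sum()
        extra = rng.choice(K, size=sampled_p, replace=False, p=probs)
        for c in np.concatenate([top, extra]):
            candidates.update(clusters[int(c)])
    for c in rng.choice(K, size=rand_blocks, replace=False):
        candidates.update(clusters[int(c)])             # random memory blocks
    return sorted(candidates)
```

Because every query in the mini-batch shares the same candidate set, the subsequent K-softmax can be computed with a single dense matrix multiplication on the GPU.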
1605.07427#18
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07678
18
3.8 PARAMETERS UTILISATION DNNs are known to be highly inefficient in utilising their full learning power (number of parameters / degrees of freedom). Prominent work (Han et al., 2015) exploits this flaw to reduce network file size up to 50×, using weight pruning, quantisation and variable-length symbol encoding. It is worth noticing that using more efficient architectures to begin with may produce even more compact representations. In Figure 10 we clearly see that, although VGG has better accuracy than AlexNet (as shown by Figure 1), its information density is worse. This means that the degrees of freedom introduced in the VGG architecture bring a smaller improvement in terms of accuracy. Moreover, ENet (Paszke et al., 2016), which we specifically designed to be highly efficient and which has been adapted and retrained on ImageNet (Culurciello, 2016) for this work, achieves the highest score, showing that 24× fewer parameters are sufficient to provide state-of-the-art results. # 4 CONCLUSIONS
1605.07678#18
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
http://arxiv.org/pdf/1605.07678
Alfredo Canziani, Adam Paszke, Eugenio Culurciello
cs.CV
7 pages, 10 figures, legend for Figure 2 got lost :/
null
cs.CV
20160524
20170414
[ { "id": "1602.07261" }, { "id": "1606.02147" }, { "id": "1512.03385" }, { "id": "1512.00567" }, { "id": "1510.00149" } ]
1605.07683
18
Task 1: Issuing API calls A user request implicitly defines a query that can contain from 0 to 4 of the required fields (sampled uniformly; in Figure 1, it contains 3). The bot must ask questions for filling the missing fields and eventually generate the correct corresponding API call. The bot asks for information in a deterministic order, making prediction possible. Task 2: Updating API calls Starting by issuing an API call as in Task 1, users then ask to update their requests between 1 and 4 times (sampled uniformly). The order in which fields are updated is random. The bot must ask users if they are done with their updates and issue the updated API call. Task 3: Displaying options Given a user request, we query the KB using the corresponding API call and add the facts resulting from the call to the dialog history. The bot must propose options to users by listing the restaurant names sorted by their corresponding rating (from higher to lower) until users accept. For each option, users have a 25% chance of accepting. If they do, the bot must stop displaying options, otherwise propose the next one. Users always accept the option if this is the last remaining one. We only keep examples with API calls retrieving at least 3 options.
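Task 3's behaviour is a small deterministic loop over the KB results with a stochastic user. A minimal simulation of that loop (names and data are illustrative):

```python
import random

def display_options(restaurants, accept_prob=0.25, rng=random):
    # Propose restaurants from highest to lowest rating until the simulated
    # user accepts; the user always accepts the last remaining option.
    ordered = sorted(restaurants, key=lambda r: r["rating"], reverse=True)
    for i, r in enumerate(ordered):
        is_last = (i == len(ordered) - 1)
        if is_last or rng.random() < accept_prob:
            return r["name"]                      # user accepts; bot stops here

print(display_options([{"name": "the_place", "rating": 3},
                       {"name": "the_fancy_pub", "rating": 8},
                       {"name": "the_palace", "rating": 6}]))
```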
1605.07683#18
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
19
• We can also sample random blocks of memory and add them to the top-K candidates. We empirically investigate the effect of these variations in Section 5.5. # 4 Related Work Memory networks have been introduced in [2] and have so far been applied to comprehension-based question answering [13, 14], large scale question answering [4] and dialogue systems [15]. While [2] considered supervised memory networks in which the correct supporting fact is given during the training stage, [14] introduced semi-supervised memory networks that can learn the supporting fact by themselves. [3, 16] introduced Dynamic Memory Networks (DMNs), which can be considered as memory networks with two types of memory: a regular large memory and an episodic memory. Another related class of model is the Neural Turing Machine [1], which uses softmax-based soft attention. Later, [17] extended the NTM to hard attention using reinforcement learning. [15, 4] alleviate the scalability problem of soft attention by having an initial keyword-based filtering stage, which reduces the number of facts being considered. Our work generalizes this filtering by using MIPS for filtering. This is desirable because MIPS can be applied to any modality of data, or even when there is no overlap between the words in a question and the words in facts.
1605.07427#19
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07678
19
# 4 CONCLUSIONS In this paper we analysed multiple state-of-the-art deep neural networks submitted to the ImageNet challenge, in terms of accuracy, memory footprint, parameters, operations count, inference time and power consumption. Our goal is to provide insights into the design choices that can lead to efficient neural networks for practical applications, and into the optimisation of the often-limited resources in actual deployments, which led us to the creation of ENet (or Efficient-Network) for ImageNet. We show that accuracy and inference time are in a hyperbolic relationship: a small increment in accuracy costs a lot of computational time. We show that the number of operations in a network model can effectively estimate inference time. We show that an energy constraint sets a specific upper bound on the maximum achievable accuracy and model complexity, in terms of operation counts. Finally, we show that ENet is the best architecture in terms of parameter space utilisation, squeezing up to 13× more information per parameter used with respect to the reference model AlexNet, and 24× with respect to VGG-19. # ACKNOWLEDGMENTS
1605.07678#19
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
http://arxiv.org/pdf/1605.07678
Alfredo Canziani, Adam Paszke, Eugenio Culurciello
cs.CV
7 pages, 10 figures, legend for Figure 2 got lost :/
null
cs.CV
20160524
20170414
[ { "id": "1602.07261" }, { "id": "1606.02147" }, { "id": "1512.03385" }, { "id": "1512.00567" }, { "id": "1510.00149" } ]
1605.07683
19
Task 4: Providing extra information Given a user request, we sample a restaurant and start the dialog as if users had agreed to book a table there. We add all KB facts corresponding to it to the dialog. Users then ask for the phone number of the restaurant, its address or both, with proportions 25%, 25% and 50% respectively. The bot must learn to use the KB facts correctly to answer. Task 5: Conducting full dialogs We combine Tasks 1-4 to generate full dialogs just as in Figure 1. Unlike in Task 3, we keep examples if API calls return at least 1 option instead of 3. 3.1.2 DATASETS
1605.07683#19
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
20
The softmax arises in various situations and most relevant to this work are scaling methods for large vocabulary neural language modeling. In neural language modeling, the final layer is a softmax distribution over the next word and there exist several approaches to achieve scalability. [18] proposes a hierarchical softmax based on prior clustering of the words into a binary, or more generally n-ary tree, that serves as a fixed structure for the learning process of the model. The complexity of training
1605.07427#20
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07678
20
# ACKNOWLEDGMENTS This paper would not have looked so pretty without the Python Software Foundation, the matplotlib library and the communities of stackoverflow and TeX of StackExchange, which I ought to thank. This work is partly supported by the Office of Naval Research (ONR) grants N00014-12-1-0167, N00014-15-1-2791 and MURI N00014-10-1-0278. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the TX1, Titan X, K40 GPUs used for this research. # REFERENCES Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient Primitives for Deep Learning. arXiv preprint arXiv:1410.0759, 2014. Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
1605.07678#20
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
http://arxiv.org/pdf/1605.07678
Alfredo Canziani, Adam Paszke, Eugenio Culurciello
cs.CV
7 pages, 10 figures, legend for Figure 2 got lost :/
null
cs.CV
20160524
20170414
[ { "id": "1602.07261" }, { "id": "1606.02147" }, { "id": "1512.03385" }, { "id": "1512.00567" }, { "id": "1510.00149" } ]
1605.07683
20
3.1.2 DATASETS We want to test how well models handle entities appearing in the KB but not in the dialog training sets. We split types of cuisine and locations in half, and create two KBs, one with all facts about restaurants within the first halves and one with the rest. This yields two KBs of 4,200 facts and 600 restaurants each (5 types of cuisine × 5 locations × 3 price ranges × 8 ratings) that only share price ranges, ratings and party sizes, but have disjoint sets of restaurants, locations, types of cuisine, phones and addresses. We use one of the KBs to generate the standard training, validation and test dialogs, and use the other KB only to generate test dialogs, termed Out-Of-Vocabulary (OOV) test sets. For training, systems have access to the training examples and both KBs. We then evaluate on both test sets, plain and OOV. Beyond the intrinsic difficulty of each task, the challenge on the OOV test sets is for models to generalize to new entities (restaurants, locations and cuisine types) unseen in any training dialog – something natively impossible for embedding methods. Ideally, models could, for instance, leverage information coming from the entities of the same type seen during training.
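The split described above is a simple partition of the cuisine and location values. A sketch of how the two disjoint KBs might be constructed (field names and the helper function are illustrative):

```python
import itertools, random

def build_disjoint_kbs(cuisines, locations, price_ranges, ratings, seed=0):
    # Halve cuisines and locations; the two resulting KBs share price ranges
    # and ratings but have disjoint cuisines and locations (hence restaurants).
    rng = random.Random(seed)
    rng.shuffle(cuisines); rng.shuffle(locations)
    hc, hl = len(cuisines) // 2, len(locations) // 2
    splits = [(cuisines[:hc], locations[:hl]), (cuisines[hc:], locations[hl:])]
    kbs = []
    for cs, ls in splits:
        kbs.append([{"cuisine": c, "location": l, "price": p, "rating": r}
                    for c, l, p, r in itertools.product(cs, ls, price_ranges, ratings)])
    return kbs  # kbs[0] -> train/val/test dialogs, kbs[1] -> OOV test dialogs
```

With 5 cuisines, 5 locations, 3 price ranges and 8 ratings per half, each KB describes 600 restaurants, matching the figures above.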
1605.07683#20
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
21
is reduced from O(n) to O(log n). Due to its clustering and tree structure, it resembles the clustering-based MIPS techniques we explore in this paper. However, the approaches differ at a fundamental level. Hierarchical softmax defines the probability of a leaf node as the product of all the probabilities computed by all the intermediate softmaxes on the way to that leaf node. By contrast, an approximate MIPS search imposes no such constraining structure on the probabilistic model, and is better thought of as efficiently searching for the top winners of what amounts to a large ordinary flat softmax. Other methods such as Noise Contrastive Estimation [19] and Negative Sampling [20] avoid an expensive normalization constant by sampling negative examples from some marginal distribution. By contrast, our approach approximates the softmax by explicitly including in its negative samples candidates that would likely have a large softmax value. [21] introduces an importance sampling approach that considers all the words in a mini-batch as the candidate set. In general, this might also not include the MIPS candidates with the highest softmax values.
1605.07427#21
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07678
21
Eugenio Culurciello. Training enet. https://culurciello.github.io/tech/2016/06/20/training-enet.html, 2016. Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012. Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013. nVIDIA. Jetson tx1 module. http://www.nvidia.com/object/jetson-tx1-module.html. Adam Paszke. torch-opcounter. https://github.com/apaszke/torch-opCounter, 2016.
1605.07678#21
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
http://arxiv.org/pdf/1605.07678
Alfredo Canziani, Adam Paszke, Eugenio Culurciello
cs.CV
7 pages, 10 figures, legend for Figure 2 got lost :/
null
cs.CV
20160524
20170414
[ { "id": "1602.07261" }, { "id": "1606.02147" }, { "id": "1512.03385" }, { "id": "1512.00567" }, { "id": "1510.00149" } ]
1605.07683
21
We generate five datasets, one per task defined in 3.1.1. Table 1 gives their statistics. Training sets are relatively small (1,000 examples) to create realistic learning conditions. The dialogs from the training and test sets are different, never being based on the same user requests. Thus, we test if models can generalize to new combinations of fields. Dialog systems are evaluated in a ranking, not a generation, setting: at each turn of the dialog, we test whether they can predict bot utterances and API calls by selecting a candidate, not by generating it.1 Candidates are ranked from a set of all bot utterances and API calls appearing in training, validation and test sets (plain and OOV) for all tasks combined. 3.2 DIALOG STATE TRACKING CHALLENGE
1605.07683#21
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
22
[22] is the only work we know of that proposes to use MIPS during learning. It proposes hashing-based MIPS to sort the hidden layer activations and reduce the computation in every layer. However, only a small-scale application was considered, and data-independent methods like hashing will likely suffer as dimensionality increases. # 5 Experiments In this section, we report experiments on factoid question answering using hierarchical memory networks. Specifically, we use the SimpleQuestions dataset [4]. The aim of these experiments is not to achieve state-of-the-art results on this dataset. Rather, we aim to propose and analyze various approaches to make memory networks more scalable and explore the achieved tradeoffs between speed and accuracy. # 5.1 Dataset
1605.07427#22
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07678
22
Adam Paszke. torch-opcounter. https://github.com/apaszke/torch-opCounter, 2016. Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147, 2016. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
1605.07678#22
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
http://arxiv.org/pdf/1605.07678
Alfredo Canziani, Adam Paszke, Eugenio Culurciello
cs.CV
7 pages, 10 figures, legend for Figure 2 got lost :/
null
cs.CV
20160524
20170414
[ { "id": "1602.07261" }, { "id": "1606.02147" }, { "id": "1512.03385" }, { "id": "1512.00567" }, { "id": "1510.00149" } ]
1605.07683
22
3.2 DIALOG STATE TRACKING CHALLENGE Since our tasks rely on synthetically generated language for the user, we supplement our dataset with real human-bot dialogs. We use data from DSTC2 (Henderson et al., 2014a), that is also in the restaurant booking domain. Unlike our tasks, its user requests only require 3 fields: type of cuisine (91 choices), location (5 choices) and price range (3 choices). The dataset was originally designed for dialog state tracking hence every dialog turn is labeled with a state (a user intent + slots) to be predicted. As our goal is to evaluate end-to-end training, we did not use that, but instead converted the data into the format of our 5 tasks and included it in the dataset as Task 6. We used the provided speech transcriptions to create the user and bot utterances, and given the dialog states we created the API calls to the KB and their outputs which we added to the dialogs. We also added ratings to the restaurants returned by the API calls, so that the options proposed by the bots can be consistently predicted (by using the highest rating). We did use the original test set but use a slightly different training/validation split. Our evaluation differs from the challenge (we do not predict the dialog state), so we cannot compare with the results from (Henderson et al., 2014a).
1605.07683#22
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
23
# 5.1 Dataset We use SimpleQuestions [4], which is a large scale factoid question answering dataset. SimpleQuestions consists of 108,442 natural language questions, each paired with a corresponding fact from Freebase. Each fact is a triple (subject, relation, object) and the answer to the question is always the object. The dataset is divided into training (75,910), validation (10,845), and test (21,687) sets. Unlike [4], who additionally considered FB2M (10M facts) or FB5M (12M facts) with keyword-based heuristics for filtering most of the facts for each question, we only use SimpleQuestions, with no keyword-based heuristics. This allows us to do a direct comparison with the full softmax approach in a reasonable amount of time. Moreover, we would like to highlight that for this dataset, keyword-based filtering is a very efficient heuristic, since all questions have an appropriate source entity with a matching word. Nevertheless, our goal is to design a general purpose architecture without such strong assumptions on the nature of the data. # 5.2 Model
1605.07427#23
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07678
23
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. Sergey Zagoruyko. imagenet-validation.torch. https://github.com/szagoruyko/imagenet-validation.torch, 2016.
1605.07678#23
An Analysis of Deep Neural Network Models for Practical Applications
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
http://arxiv.org/pdf/1605.07678
Alfredo Canziani, Adam Paszke, Eugenio Culurciello
cs.CV
7 pages, 10 figures, legend for Figure 2 got lost :/
null
cs.CV
20160524
20170414
[ { "id": "1602.07261" }, { "id": "1606.02147" }, { "id": "1512.03385" }, { "id": "1512.00567" }, { "id": "1510.00149" } ]
1605.07683
23
This dataset has similar statistics to our Task 5 (see Table 1) but is harder. The dialogs are noisier and the bots made mistakes due to speech recognition errors or misinterpretations and also do not always have a deterministic behavior (the order in which they can ask for information varies). 3.3 ONLINE CONCIERGE SERVICE Tasks 1-6 are, at least partially, artificial. This provides perfect control over their design (at least for Tasks 1-5), but no guarantee that good performance would carry over from such synthetic to more realistic conditions. To quantify this, we also evaluate the models from Section 4 on data extracted from a real online concierge service performing restaurant booking: users make requests through a text-based chat interface that are handled by human operators who can make API calls. All conversations are between native English speakers.
1605.07683#23
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
24
# 5.2 Model Let $V_q$ be the vocabulary of all words in the natural language questions. Let $W_q$ be a $|V_q| \times m$ matrix where each row is an $m$-dimensional embedding for a word in the question vocabulary. This matrix is initialized with random values and learned during training. Given any question, we represent it with a bag-of-words representation by summing the vector representations of its words. Let $q = \{w_i\}_{i=1}^{p}$; then $$h(q) = \sum_{i=1}^{p} W_q[w_i].$$ Then, to find the relevant fact from the memory M, we call the K-MIPS-based reader module with h(q) as the query. This uses Equations 3 and 4 to compute the output of the reader $R_{\text{out}}$. The reader is trained by minimizing the Negative Log Likelihood (NLL) of the correct fact, $$J = \sum_{i=1}^{N} -\log\big(R_{\text{out}}[f_i]\big),$$ where $f_i$ is the index of the correct fact in $W_m$. We fix the memory embeddings to the TransE [23] embeddings and learn only the question embeddings. This model is simpler than the one reported in [4] so that it is easy to analyze the effect of various memory reading strategies. # 5.3 Training Details
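A minimal NumPy sketch of this reader (the dimensions, names, and the assumption that the correct fact is among the K-MIPS candidates are ours, for illustration only):

```python
import numpy as np

vocab_size, emb_dim, n_facts = 5_000, 600, 5_000         # illustrative sizes
W_q = np.random.randn(vocab_size, emb_dim) * 0.01        # learned question embeddings
M = np.random.randn(n_facts, emb_dim)                    # fixed fact memory (e.g. TransE)

def reader_nll(question_word_ids, candidate_ids, correct_fact, memory=M):
    """Bag-of-words encoding h(q) = sum_i W_q[w_i], softmax over the K-MIPS
    candidate facts, and the NLL of the correct one (assumed to be a candidate)."""
    h_q = W_q[question_word_ids].sum(axis=0)              # h(q)
    scores = memory[candidate_ids] @ h_q                  # inner products with candidates
    scores -= scores.max()                                # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())     # log of the K-softmax (R_out)
    return -log_probs[candidate_ids.index(correct_fact)]

loss = reader_nll([4, 17, 256], candidate_ids=list(range(50)), correct_fact=7)
print(loss)
```

Only W_q would receive gradients in the real model; the memory stays fixed to the TransE embeddings.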
1605.07427#24
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
24
We collected around 4k chats to create this extra dataset, denoted Concierge. All conversations have been anonymized by (1) removing all user identifiers, (2) using the Stanford NER tagger to remove named entities (locations, timestamps, etc.), (3) running some manually defined regex to filter out any remaining salient information (phone numbers, etc.). The dataset does not contain results from API calls, but still records when operators made use of an external service (Yelp or OpenTable) to gather information. Hence, these have to be predicted, but without any argument (unlike in Task 2).
1605.07683#24
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
25
This model is simpler than the one reported in [4] so that it is easy to analyze the effect of various memory reading strategies. # 5.3 Training Details We trained the model with the Adam optimizer [24], with a fixed learning rate of 0.001. We used mini-batches of size 128. We used 200-dimensional embeddings for the TransE entities, yielding 600-dimensional embeddings for facts by concatenating the embeddings of the subject, relation and object. We also experimented with summing the entities in the triple instead of concatenating them, but we found that it was difficult for the model to differentiate facts this way. The only parameters learned by the HMN model are the question word embeddings. The entity distribution in SimpleQuestions is extremely sparse and hence, following [4], we also add artificial questions for all the facts for which we do not have natural language questions. Unlike [4], we do not add any other additional tasks like paraphrase detection to the model, mainly to study the effect of the reader. We stopped training for all the models when the validation accuracy consistently decreased for 3 epochs. # 5.4 Exact K-MIPS improves accuracy
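A small sketch of the fact representation described above (the lookup structures are illustrative; the paper fixes the 200-d entity and relation vectors to pretrained TransE embeddings):

```python
import numpy as np

def fact_embedding(entity_emb, relation_emb, triple):
    # Concatenate subject, relation and object TransE vectors (200-d each) into
    # a 600-d fact embedding; summing them was found to make facts hard to tell apart.
    s, r, o = triple
    return np.concatenate([entity_emb[s], relation_emb[r], entity_emb[o]])

entity_emb = {"paris": np.random.randn(200), "france": np.random.randn(200)}
relation_emb = {"capital_of": np.random.randn(200)}
print(fact_embedding(entity_emb, relation_emb, ("paris", "capital_of", "france")).shape)
```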
1605.07427#25
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
25
The statistics of Concierge are given in Table 1. The dialogs are shorter than in Tasks 1-6, especially since they do not include results of API calls, but the vocabulary is more diverse and so is the candidate set; the candidate set is made of all utterances of the operator appearing in the training, validation and test sets. Beyond the higher variability of the language used by human operators compared to bots, the dataset offers additional challenges. The set of user requests is much wider, ranging from managing restaurant reservations to asking for recommendations or specific information. Users do not always stay focused on the request. API calls are not always used (e.g., the operator might use neither Yelp nor OpenTable to find a restaurant), and facts about restaurants are neither structured nor constrained as in a KB. The structure of dialogs is thus much more variable. Users and operators also make typos, spelling and grammar mistakes. 1 Lowe et al. (2016) termed this setting Next-Utterance-Classification. # 4 MODELS To demonstrate how to use the dataset and provide baselines, we evaluate several learning methods on our goal-oriented dialog tasks: rule-based systems, classical information retrieval methods, supervised embeddings, and end-to-end Memory networks. 4.1 RULE-BASED SYSTEMS
1605.07683#25
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
26
# 5.4 Exact K-MIPS improves accuracy In this section, we compare the performance of the full soft attention reader and exact K-MIPS attention readers. Our goal is to verify that K-MIPS attention is in fact a valid and useful attention mechanism and see how it fares when compared to full soft attention. For K-MIPS attention, we tried K ∈ {10, 50, 100, 1000}. We would like to emphasize that, at training time, along with the K candidates for a particular question, we also add the K candidates for each question in the mini-batch. So the exact size of the softmax layer would be higher than K during training. In Table 1, we report the test performance of memory networks using the soft attention reader and K-MIPS attention readers. We also report the average softmax size during training. From the table, it is clear that the K-MIPS attention readers improve the performance of the network compared to the soft attention reader. In fact, the smaller the value of K, the better the performance. This result suggests that it is better to use a K-MIPS layer instead of a softmax layer whenever possible. It is interesting to see that the convergence of the model is not slowed down due to this change in softmax computation (as shown in Figure 1).
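A minimal sketch of this candidate-pooling step, assuming the queries and the fact-embedding memory are plain NumPy arrays, is given below; it is illustrative only and not the authors' implementation.

```python
# Hedged sketch of exact K-MIPS attention with mini-batch candidate pooling:
# each query keeps its top-K facts by inner product, the candidate sets are
# unioned across the batch, and the softmax runs only over that pool.
import numpy as np

def kmips_attention(queries: np.ndarray, memory: np.ndarray, k: int):
    """queries: (B, d), memory: (N, d). Returns pooled candidate ids and attention."""
    scores = queries @ memory.T                     # (B, N) inner products
    topk = np.argsort(-scores, axis=1)[:, :k]       # exact top-K per query
    pool = np.unique(topk)                          # shared pool, size >= k
    pooled = scores[:, pool]                        # (B, |pool|)
    pooled -= pooled.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(pooled) / np.exp(pooled).sum(axis=1, keepdims=True)
    return pool, attn

rng = np.random.default_rng(1)
q, M = rng.normal(size=(4, 16)), rng.normal(size=(1000, 16))
pool, attn = kmips_attention(q, M, k=10)
print(pool.shape, attn.shape)   # effective softmax size is |pool|, not 1000
```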
1605.07427#26
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
26
4.1 RULE-BASED SYSTEMS Our tasks T1-T5 are built with a simulator so as to be completely predictable. Thus it is possible to hand-code a rule-based system that achieves 100% on them, similar to the bAbI tasks of Weston et al. (2015b). Indeed, the point of these tasks is not to check whether a human is smart enough to be able to build a rule-based system to solve them, but to help analyze in which circumstances machine learning algorithms are smart enough to work, and where they fail.
1605.07683#26
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
27
Model | Test Acc. | Avg. Softmax Size
Full-softmax | 59.5 | 108442
10-MIPS | 62.2 | 1290
50-MIPS | 61.2 | 6180
100-MIPS | 60.6 | 11928
1000-MIPS | 59.6 | 70941
Clustering | 51.5 | 20006
PCA-Tree | 32.4 | 21108
WTA-Hash | 40.2 | 20008
Table 1: Accuracy on the SQ test set and average size of the memory used. 10-softmax has high performance while using only a small amount of memory. Figure 1: Validation curve for various models. Convergence is not slowed down by k-softmax. This experiment confirms the usefulness of K-MIPS attention. However, exact K-MIPS has the same complexity as a full softmax. Hence, to scale up the training, we need more efficient forms of K-MIPS attention, which is the focus of the next experiment. # 5.5 Approximate K-MIPS based learning
1605.07427#27
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
27
However, the Dialog State Tracking Challenge task (T6) contains some real interactions with users. This makes rule-based systems less straightforward and not so accurate (which is where we expect machine learning to be useful). We implemented a rule-based system for this task in the following way. We initialized a dialog state using the 3 relevant slots for this task: cuisine type, location and price range. Then we analyzed the training data and wrote a series of rules that fire for triggers like word matches, positions in the dialog, entity detections or dialog state, to output particular responses, API calls and/or update a dialog state. Responses are created by combining patterns extracted from the training set with entities detected in the previous turns or stored in the dialog state. Overall we built 28 rules and extracted 21 patterns. We optimized the choice of rules and their application priority (when needed) using the validation set, reaching a validation per-response accuracy of 40.7%. We did not build a rule-based system for Concierge data as it is even less constrained. 4.2 CLASSICAL INFORMATION RETRIEVAL MODELS Classical information retrieval (IR) models with no machine learning are standard baselines that often perform surprisingly well on dialog tasks (Isbell et al., 2000; Jafarpour et al., 2010; Ritter et al., 2011; Sordoni et al., 2015). We tried two standard variants:
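To make the flavor of such rules concrete, here is an invented toy illustration of a trigger-based rule system: rules fire on word matches and the current dialog state and emit patterned responses. None of the 28 real rules or 21 patterns are reproduced here.

```python
# Toy sketch of a rule-based responder with a small dialog state.
STATE = {"cuisine": None, "location": None, "price": None}

def rule_update_cuisine(utterance, state):
    # If the user names a cuisine, store it and ask the next slot question.
    for cuisine in ("italian", "indian", "french"):
        if cuisine in utterance:
            state["cuisine"] = cuisine
            return f"Ok, {cuisine} it is. Where should it be?"
    return None

def rule_ask_cuisine(utterance, state):
    # If the user wants to book but no cuisine is known yet, ask for it.
    if "book" in utterance and state["cuisine"] is None:
        return "What kind of cuisine would you like?"
    return None

RULES = [rule_update_cuisine, rule_ask_cuisine]  # application priority

def respond(utterance, state=STATE):
    for rule in RULES:
        out = rule(utterance.lower(), state)
        if out is not None:
            return out
    return "Sorry, I did not understand."

print(respond("I'd like to book a table"))
print(respond("Italian please"))
```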
1605.07683#27
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
28
# 5.5 Approximate K-MIPS based learning As mentioned previously, designing faster algorithms for K-MIPS is an active area of research. [11] compared several state-of-the-art data-dependent and data-independent methods for faster approximate K-MIPS and it was found that clustering-based MIPS performs significantly better than other approaches. However, the focus of the comparison was on performance during the inference stage. In HMNs, K-MIPS must be used at both the training and inference stages. To verify if the same trend can be seen during the learning stage as well, we compared three different approaches: Clustering: This was explained in detail in Section 3. WTA-Hash: Winner Takes All hashing [25] is a hashing-based K-MIPS algorithm which also converts MIPS to MCSS by augmenting additional dimensions to the vectors. This method uses n hash functions, and each hash function applies p different random permutations of the vector. The prefix formed by the first k elements of each permuted vector is then used to construct the hash for the vector.
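The snippet below gives a rough, illustrative rendering of such a WTA hash code (function and parameter names are invented); the exact parameterization and use of the codes follow the cited paper rather than this sketch.

```python
# Rough sketch of Winner-Take-All (WTA) hashing: each hash permutes the
# vector and records the argmax within the first-k prefix of the permuted
# vector; the concatenated argmax indices form the hash code.
import numpy as np

def wta_hash(x: np.ndarray, perms: np.ndarray, k: int) -> np.ndarray:
    # perms: (n, d) array, each row a random permutation of range(d)
    codes = []
    for perm in perms:
        prefix = x[perm[:k]]          # first k elements of the permuted vector
        codes.append(int(np.argmax(prefix)))
    return np.array(codes)            # one small integer per hash function

rng = np.random.default_rng(2)
d, n, k = 64, 8, 4
perms = np.stack([rng.permutation(d) for _ in range(n)])
code = wta_hash(rng.normal(size=d), perms, k)
print(code)  # n values, each in [0, k)
```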
1605.07427#28
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
28
TF-IDF Match For each possible candidate response, we compute a matching score between the input and the response, and rank the responses by score. The score is the TF–IDF weighted cosine similarity between the bag-of-words of the input and bag-of-words of the candidate response. We consider the case of the input being either only the last utterance or the entire conversation history, and choose the variant that works best on the validation set (typically the latter). Nearest Neighbor Using the input, we find the most similar conversation in the training set, and output the response from that example. In this case we consider the input to only be the last utterance, and consider the training set as (utterance, response) pairs that we select from. We use word overlap as the scoring method. When several responses are associated with the same utterance in training, we sort them by decreasing co-occurrence frequency. 4.3 SUPERVISED EMBEDDING MODELS
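The TF-IDF Match baseline can be illustrated in a few lines of scikit-learn; the dialog snippets below are invented and only meant to show the ranking step.

```python
# Illustrative sketch of the TF-IDF Match baseline: candidate responses are
# ranked by TF-IDF weighted cosine similarity to the conversation history.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

history = "i would like to book a table for six in a cheap price range"
candidates = [
    "which price range are you looking for",
    "how many people would be in your party",
    "api_call italian london six cheap",
]

vec = TfidfVectorizer().fit(candidates + [history])
scores = cosine_similarity(vec.transform([history]), vec.transform(candidates))[0]
ranked = sorted(zip(scores, candidates), reverse=True)
print(ranked[0])  # highest-scoring candidate response
```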
1605.07683#28
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
29
PCA-Tree: PCA-Tree [7] is the state-of-the-art tree-based method, which converts MIPS to NNS by vector augmentation. It uses the principal components of the data to construct a balanced binary tree with data residing in the leaves. For a fair comparison, we varied the hyper-parameters of each algorithm in such a way that the average speedup is approximately the same. Table 1 shows the performance of all three methods, compared to a full softmax. From the table, it is clear that the clustering-based method performs significantly better than the other two methods. However, its performance is still lower than that of the full softmax. As a next experiment, we analyze the various strategies proposed in Section 3.1 to reduce the approximation bias of clustering-based K-MIPS: Top-K: This strategy picks the vectors in the top K clusters as candidates. Sample-K: This strategy samples K clusters, without replacement, according to a probability distribution derived from the dot product of the query with the cluster centroids. When combined with the Top-K strategy, we ignore clusters selected by the Top-K strategy for sampling. Rand-block: This strategy divides the memory into several blocks and uniformly samples a random block as candidate.
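The following hedged sketch shows one way the Top-K and Sample-K strategies can be combined to pick candidate clusters; the array shapes and the softmax used for the sampling probabilities are assumptions of this sketch, not details taken from the paper.

```python
# Sketch of cluster-candidate selection: Top-K clusters by centroid dot
# product, plus Sample-K additional clusters drawn without replacement
# with probabilities proportional to a softmax over the remaining scores.
import numpy as np

def select_clusters(query, centroids, top_k=5, sample_k=5, rng=None):
    rng = rng or np.random.default_rng()
    scores = centroids @ query                        # dot product with centroids
    top = np.argsort(-scores)[:top_k]                 # Top-K strategy
    rest = np.setdiff1d(np.arange(len(centroids)), top)
    probs = np.exp(scores[rest] - scores[rest].max()) # sampling distribution
    probs /= probs.sum()
    sampled = rng.choice(rest, size=sample_k, replace=False, p=probs)  # Sample-K
    return np.concatenate([top, sampled])

rng = np.random.default_rng(3)
centroids, query = rng.normal(size=(1000, 32)), rng.normal(size=32)
print(select_clusters(query, centroids, rng=rng))
```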
1605.07427#29
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
29
4.3 SUPERVISED EMBEDDING MODELS A standard, often strong, baseline is to use supervised word embedding models for scoring (conversation history, response) pairs. The embedding vectors are trained directly for this goal. In contrast, word embeddings are most well-known in the context of unsupervised training on raw text as in word2vec (Mikolov et al., 2013). Such models are trained by learning to predict the middle word given the surrounding window of words, or vice-versa. However, given training data consisting of dialogs, a much more direct and strongly performing training procedure can be used: predict the next response given the previous conversation. In this setting a candidate response y is scored against the input x: f(x, y) = (Ax)^T By, where A and B are d × V word embedding matrices, i.e. input and response are treated as summed bags-of-embeddings. We also consider the case of enforcing A = B, which sometimes works better, and optimize the choice on the validation set.
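An illustrative rendering of this scorer, with an arbitrary vocabulary size, embedding dimension and random matrices standing in for the learned parameters, might look as follows:

```python
# Minimal sketch of the supervised embedding scorer f(x, y) = (Ax)^T (By),
# where x and y are bag-of-words vectors over a vocabulary of size V and
# A, B are d x V embedding matrices (optionally tied, A = B).
import numpy as np

V, d = 5000, 32
rng = np.random.default_rng(4)
A = rng.normal(scale=0.1, size=(d, V))
B = A  # tied variant; the untied variant uses an independent matrix

def score(x_bow: np.ndarray, y_bow: np.ndarray) -> float:
    # Summed bags-of-embeddings followed by a dot product.
    return float((A @ x_bow) @ (B @ y_bow))

x = np.zeros(V); x[[10, 42, 7]] = 1.0     # conversation history as bag-of-words
y = np.zeros(V); y[[42, 99]] = 1.0        # candidate response as bag-of-words
print(score(x, y))
```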
1605.07683#29
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
30
Rand-block: This strategy divides the memory into several blocks and uniformly samples a random block as candidate. We experimented with 1000 clusters and 2000 clusters. While comparing various training strategies, we made sure that the effective speedup is approximately the same. Memory access to facts per query for all the models is approximately 20,000, hence yielding a 5X speedup.
Top-K | Sample-K | Rand-block | 1000 clusters: Test Acc. / epochs | 2000 clusters: Test Acc. / epochs
Yes | No | No | 50.2 / 16 | 51.5 / 22
No | Yes | No | 52.5 / 68 | 52.8 / 63
Yes | Yes | No | 52.8 / 31 | 53.1 / 26
Yes | No | Yes | 51.8 / 32 | 52.3 / 26
Yes | Yes | Yes | 52.5 / 38 | 52.7 / 19
Table 2: Accuracy in SQ test set and number of epochs for convergence. Results are given in Table 2. We observe that the best approach is to combine the Top-K and Sample-K strategies, with Rand-block not being beneficial. Interestingly, the worst performances correspond to cases where the Sample-K strategy is ignored. # 6 Conclusion
1605.07427#30
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
30
The embeddings are trained with a margin ranking loss: f(x, y) > m + f(x, ȳ), with m the size of the margin, and we sample N negative candidate responses ȳ per example, and train with SGD. This approach has been previously shown to be very effective in a range of contexts (Bai et al., 2009; Dodge et al., 2016). This method can be thought of as a classical information retrieval model, but where the matching function is learnt. 4.4 MEMORY NETWORKS Memory Networks (Weston et al., 2015a; Sukhbaatar et al., 2015) are a recent class of models that have been applied to a range of natural language processing tasks, including question answering (Weston et al., 2015b), language modeling (Sukhbaatar et al., 2015), and non-goal-oriented dialog (Dodge et al., 2016). By first writing and then iteratively reading from a memory component (using hops) that can store historical dialogs and short-term context to reason about the required response, they have been shown to perform well on those tasks and to outperform some other end-to-end architectures based on Recurrent Neural Networks. Hence, we chose them as our end-to-end model baseline.
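A minimal sketch of the corresponding hinge loss, assuming the positive score and the N negative scores have already been computed (averaging over negatives is an assumption of this sketch):

```python
# Sketch of the margin ranking objective: penalize whenever f(x, y_pos)
# fails to exceed f(x, y_neg) by at least the margin m, averaged over the
# N sampled negative candidates.
import numpy as np

def margin_ranking_loss(score_pos: float, scores_neg: np.ndarray, m: float = 0.1) -> float:
    # hinge term max(0, m + f(x, y_bar) - f(x, y)) per negative candidate
    return float(np.maximum(0.0, m + scores_neg - score_pos).mean())

print(margin_ranking_loss(0.8, np.array([0.5, 0.9, 0.2])))
```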
1605.07683#30
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
31
# 6 Conclusion In this paper, we proposed a hierarchical memory network that exploits K-MIPS for its attention-based reader. Unlike soft attention readers, the K-MIPS attention reader scales easily to larger memories. This is achieved by organizing the memory in a hierarchical way. Experiments on the SimpleQuestions dataset demonstrate that exact K-MIPS attention is better than soft attention. However, existing state-of-the-art approximate K-MIPS techniques provide a speedup at the cost of some accuracy. Future research will investigate designing efficient dynamic K-MIPS algorithms, where the memory can be dynamically updated during training. This should reduce the approximation bias and hence improve the overall performance. # References [1] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. [2] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the International Conference on Representation Learning (ICLR 2015), 2015. In press. [3] Ankit Kumar et al. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285, 2015.
1605.07427#31
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
31
We use the MemN2N architecture of Sukhbaatar et al. (2015), with an additional modification to leverage exact matches and types, described shortly. Apart from that addition, the main components of the model are (i) how it stores the conversation in memory, (ii) how it reads from the memory to reason about the response; and (iii) how it outputs the response. The details are given in Appendix A. 4.5 MATCH TYPE FEATURES TO DEAL WITH ENTITIES Words denoting entities have two important traits: 1) exact matches are usually more appropriate to deal with them than approximate matches, and 2) they frequently appear as OOV words (e.g., the name of a new restaurant). Both are a challenge for embedding-based methods. Firstly, embedding into a low dimensional space makes it hard to differentiate between exact word matches, and matches between words with similar meaning (Bai et al., 2009). While this can be a virtue (e.g. when using synonyms), it is often a flaw when dealing with entities (e.g. failure to differentiate between phone numbers since they have similar embeddings). Secondly, when a new word is used (e.g. the name of a new restaurant) not seen before in training, no word embedding is available, typically resulting in failure (Weston et al., 2015a).
1605.07683#31
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
32
[3] Ankit Kumar et al. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285, 2015. [4] Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015. [5] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992. [6] Parikshit Ram and Alexander G. Gray. Maximum inner-product search using cone trees. KDD ’12, pages 931–939, 2012. [7] Yoram Bachrach et al. Speeding up the xbox recommender system using a euclidean transformation for inner-product spaces. RecSys ’14, pages 257–264, 2014. [8] Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems 27, pages 2321–2329, 2014.
1605.07427#32
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
32
Both problems can be alleviated with match type features. Specifically, we augment the vocabulary with 7 special words, one for each of the KB entity types (cuisine type, location, price range, party size, rating, phone number and address). For each type, the corresponding type word is added to the candidate representation if a word is found that appears 1) as a KB entity of that type, 2) in the candidate, and 3) in the input or memory. Any word that matches as a KB entity can be typed even if it has never been seen before in training dialogs. These features allow the model to learn to rely on type information using exact word-match cues when OOV entity embeddings are not known, as long as it has access to a KB with the OOV entities. We assess the impact of such features for TF-IDF Match, Supervised Embeddings and Memory Networks. # 5 EXPERIMENTS
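A small illustrative sketch of this mechanism, with an invented toy KB and type tokens, could look like the following; the actual feature implementation is not specified beyond the three conditions above.

```python
# Hedged sketch of match type features: append a special type token to a
# candidate's representation when some word occurs (1) as a KB entity of
# that type, (2) in the candidate, and (3) in the conversation context.
KB_TYPES = {
    "cuisine": {"italian", "indian"},
    "location": {"london", "paris"},
    "price": {"cheap", "expensive"},
}

def add_match_types(candidate: str, context: str):
    cand_words, ctx_words = set(candidate.split()), set(context.split())
    features = candidate.split()
    for type_name, entities in KB_TYPES.items():
        if entities & cand_words & ctx_words:        # conditions (1)-(3)
            features.append(f"<match_{type_name}>")  # special type word
    return features

print(add_match_types("api_call italian london cheap",
                      "i want cheap italian food in london"))
```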
1605.07683#32
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
33
[9] Anshumali Shrivastava and Ping Li. Improved asymmetric locality sensitive hashing (alsh) for maximum inner product search (mips). In Proceedings of Conference on Uncertainty in Artificial Intelligence (UAI), 2015. [10] Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric lshs for inner product search. In Proceedings of the 31st International Conference on Machine Learning, 2015. [11] Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, and Yoshua Bengio. Clustering is efficient for approximate maximum inner product search. arXiv preprint arXiv:1507.05910, 2015. [12] Shi Zhong. Efficient online spherical k-means clustering. In Neural Networks, 2005. IJCNN’05. Proceed- ings. 2005 IEEE International Joint Conference on, volume 5, pages 3180–3185. IEEE, 2005. [13] Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards ai-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.
1605.07427#33
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
33
# 5 EXPERIMENTS Our main results across all the models and tasks are given in Table 2 (extra results are also given in Table 10 of Appendix D). The first 5 rows show tasks T1-T5, and rows 6-10 show the same tasks in the out-of-vocabulary setting. Rows 11 and 12 give results for the Dialog State Tracking Challenge task (T6) and Concierge respectively. Columns 2-7 give the results of each method tried in terms of per-response accuracy and per-dialog accuracy, the latter given in parentheses. Per-response accuracy counts the percentage of responses that are correct (i.e., the correct candidate is chosen out of all possible candidates). Per-dialog accuracy counts the percentage of dialogs where every response is correct. Ultimately, if only one response is incorrect this could result in a failed dialog, i.e. failure to achieve the goal (in this case, making a restaurant booking). Note that we test Memory Networks (MemNNs) with and without match type features; the results are shown in the last two columns. The hyperparameters for all models were optimized on the validation sets; values for the best-performing models are given in Appendix C.
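For clarity, the two metrics can be computed as in the following toy sketch; the dialog outcomes are invented.

```python
# Per-response accuracy: fraction of responses ranked correctly.
# Per-dialog accuracy: fraction of dialogs where every response is correct.
def evaluate(dialogs):
    # dialogs: list of dialogs, each a list of booleans (response correct?)
    responses = [ok for d in dialogs for ok in d]
    per_response = sum(responses) / len(responses)
    per_dialog = sum(all(d) for d in dialogs) / len(dialogs)
    return per_response, per_dialog

print(evaluate([[True, True, False], [True, True], [True, False, True]]))
```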
1605.07683#33
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
34
[14] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015. [15] Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931, 2015. [16] Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016. [17] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. CoRR, abs/1505.00521, 2015. [18] Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In Robert G. Cowell and Zoubin Ghahramani, editors, Proceedings of AISTATS, pages 246–252, 2005. [19] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
1605.07427#34
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
34
The classical IR method TF-IDF Match performs the worst of all methods, and much worse than the Nearest Neighbor IR method, which is true on both the simulated tasks T1-T5 and on the real data of T6 and Concierge. Supplementing TF-IDF Match with match type features noticeably improves performance, which however still remains far behind Nearest Neighbor IR (adding bigrams to the dictionary has no effect on performance). Table 2: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog state tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parentheses. (∗) For Concierge, an example is considered correctly answered if the correct response is ranked among the top 10 candidates by the bot, to accommodate the much larger range of semantically equivalent responses among candidates (see ex. in Tab. 7). (†) We did not implement MemNNs+match type on Concierge, because this method requires a KB and there is none associated with it.
1605.07683#34
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07427
35
[20] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations, Workshop Track, 2013. [21] Sébastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of ACL 2015, pages 1–10, 2015. [22] Ryan Spring and Anshumali Shrivastava. Scalable and sustainable deep learning via randomized hashing. CoRR, abs/1602.08194, 2016. [23] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in NIPS, pages 2787–2795. 2013. [24] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. [25] Sudheendra Vijayanarasimhan, Jon Shlens, Rajat Monga, and Jay Yagnik. Deep networks with large output spaces. arXiv preprint arXiv:1412.7479, 2014.
1605.07427#35
Hierarchical Memory Networks
Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.
http://arxiv.org/pdf/1605.07427
Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio
stat.ML, cs.CL, cs.LG, cs.NE
10 pages
null
stat.ML
20160524
20160524
[ { "id": "1507.05910" }, { "id": "1502.05698" }, { "id": "1503.08895" }, { "id": "1506.02075" } ]
1605.07683
35
Task T1: Issuing API calls T2: Updating API calls T3: Displaying options T4: Providing information T5: Full dialogs T1(OOV): Issuing API calls T2(OOV): Updating API calls T3(OOV): Displaying options T4(OOV): Providing inform. T5(OOV): Full dialogs T6: Dialog state tracking 2 Concierge(∗) Rule-based Systems 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 100 (100) 33.3 (0) n/a TF-IDF Match no type 5.6 (0) 3.4 (0) 8.0 (0) 9.5 (0) 4.6 (0) 5.8 (0) 3.5 (0) 8.3 (0) 9.8 (0) 4.6 (0) 1.6 (0) 1.1 (0.2) + type 22.4 (0) 16.4 (0) 8.0 (0) 17.8 (0) 8.1 (0) 22.4 (0) 16.8 (0) 8.3 (0) 17.2 (0) 9.0 (0)
1605.07683#35
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
36
(0) 8.1 (0) 22.4 (0) 16.8 (0) 8.3 (0) 17.2 (0) 9.0 (0) 1.6 (0) n/a Nearest Neighbor 55.1 (0) 68.3 (0) 58.8 (0) 28.6 (0) 57.1 (0) 44.1 (0) 68.3 (0) 58.8 (0) 28.6 (0) 48.4 (0) 21.9 (0) 13.4 (0.5) Supervised Embeddings 100 (100) (0) 68.4 (0) 64.9 (0) 57.2 (0) 75.4 (0) 60.0 (0) 68.3 (0) 65.0 (0) 57.0 (0) 58.2 22.6 (0) 14.6 (0.5) Memory Networks no match type 99.9 (99.6) 100 (100) 74.9 (2.0) (3.0) 59.5 96.1 (49.4) 72.3 78.9 74.4 57.6 65.5 41.1 16.7 (0) (0) (0) (0) (0) (0) (1.2) + match type 100 (100) 98.3
1605.07683#36
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
38
dictionary has no effect on performance). This is in sharp contrast to other recent results on data-driven non-goal-directed conversations, e.g. over dialogs on Twitter (Ritter et al., 2011) or Reddit (Dodge et al., 2016), where it was found that TF-IDF Match outperforms Nearest Neighbor, as general conversations on a given subject typically share many words. We conjecture that the goal-oriented nature of the conversation means that the conversation moves forward more quickly, sharing fewer words per (input, response) pair, e.g. consider the example in Figure 1. Supervised embeddings outperform classical IR methods in general, indicating that learning mappings between words (via word embeddings) is important. However, only one task (T1, Issuing API calls) is completely successful. In the other tasks, some responses are correct, as shown by the per-response accuracy; however, there is no dialog where the goal is actually achieved (i.e., the mean per-dialog accuracy is 0). Typically the model can provide correct responses for greeting messages, asking to wait, making API calls and asking if any other options are necessary. However, it fails to interpret the results of API calls to display options, provide information or update the calls with new information, resulting in most of its errors, even when match type features are provided.
1605.07683#38
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
39
Memory Networks (without match type features) outperform classical IR and supervised embeddings across all of the tasks. They can solve the first two tasks (issuing and updating API calls) adequately. On the other tasks, they give improved results, but do not solve them. While the per-response accuracy is improved, the per-dialog accuracy is still close to 0 on T3 and T4. Some examples of predictions of the MemNN for T1-4 are given in Appendix B. On the OOV tasks, performance is again improved, but this is entirely due to better performance on known words, as unknown words are simply not used without the match type features. As stated in Appendix C, optimal hyperparameters on several of the tasks involve 3 or 4 hops, indicating that iteratively accessing and reasoning over the conversation helps, e.g. on T3 using 1 hop gives 64.8% while 2 hops yields 74.7%. Appendix B displays illustrative examples of Memory Network predictions on T1-4 and Concierge.
1605.07683#39
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
40
Memory Networks with match type features give two performance gains over the same models without match type features: (i) T4 (providing information) becomes solvable because matches can be made to the results of the API call; and (ii) out-of-vocabulary results are significantly improved as well. Still, tasks T3 and T5 remain failure cases, performance drops slightly on T2 compared to not using match type features, and no relative improvement is observed on T6. Finally, note that matching words on its own is not enough, as evidenced by the poor performance of TF-IDF matching; this idea must be combined with types and the other properties of the MemNN model. Unsurprisingly, perfectly coded rule-based systems can solve the simulated tasks T1-T5 perfectly, whereas our machine learning methods cannot. However, it is not easy to build an effective rule-based system when dealing with real language on real problems, and our rule-based system is outperformed by MemNNs on the more realistic task T6.
1605.07683#40
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
41
It is not easy to build an effective rule-based system when dealing with real language on real problems, and our rule-based system is outperformed by MemNNs on the more realistic task T6. Overall, while the methods we tried made some inroads into these tasks, there are still many challenges left unsolved. Our best models can learn to track implicit dialog states and manipulate OOV words and symbols (T1-T2) to issue API calls and progress in conversations, but they are still unable to perfectly handle interpreting knowledge about entities (from returned API calls) to present results to the user, e.g. displaying options in T3. The improvement observed on the simulated tasks, where MemNNs outperform supervised embeddings, which in turn outperform IR methods, is also seen on the realistic data of T6 with similar relative gains. This is encouraging as it indicates that future work on breaking down, analysing and developing models over the simulated tasks should help in the real tasks as well. Results on Concierge confirm this observation: the pattern of relative performances of methods is the same on Concierge and on our series of tasks. This suggests that our synthetic data can indeed be used as an effective evaluation proxy. # 6 CONCLUSION
1605.07683#41
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
42
# 6 CONCLUSION We have introduced an open dataset and task set for evaluating end-to-end goal-oriented dialog learning methods in a systematic and controlled way. We hope this will help foster progress of end-to-end conversational agents because (i) existing measures of performance either prevent reproducibility (different Mechanical Turk jobs) or do not correlate well with human judgements (Liu et al., 2016); (ii) the breakdown in tasks will help focus research and development to improve the learning methods; and (iii) goal-oriented dialog has clear utility in real applications. We illustrated how to use the testbed using a variant of end-to-end Memory Networks, which prove an effective model on these tasks relative to other baselines, but are still lacking in some key areas. ACKNOWLEDGMENTS The authors would like to thank Martin Raison, Alex Lebrun and Laurent Landowski for their help with the Concierge data. # REFERENCES Bai, B., Weston, J., Grangier, D., Collobert, R., Sadamasa, K., Qi, Y., Chapelle, O., and Weinberger, K. (2009). Supervised semantic indexing. In Proceedings of ACM CIKM, pages 187–196. ACM.
1605.07683#42
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
43
Banchs, R. E. (2012). Movie-dic: a movie dialogue corpus for research and development. In Proceedings of the 50th Annual Meeting of the ACL. Chen, Y.-N., Hakkani-Tür, D., Tur, G., Gao, J., and Deng, L. (2016). End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In Proceedings of Interspeech. Dahl, D. A., Bates, M., Brown, M., Fisher, W., Hunicke-Smith, K., Pallett, D., Pao, C., Rudnicky, A., and Shriberg, E. (1994). Expanding the scope of the atis task: The atis-3 corpus. In Proceedings of the workshop on Human Language Technology, pages 43–48. Association for Computational Linguistics. Dodge, J., Gane, A., Zhang, X., Bordes, A., Chopra, S., Miller, A., Szlam, A., and Weston, J. (2016). Evaluating prerequisite qualities for learning end-to-end dialog systems. In Proc. of ICLR.
1605.07683#43
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
44
Gašic, M., Kim, D., Tsiakoulis, P., Breslin, C., Henderson, M., Szummer, M., Thomson, B., and Young, S. (2014). Incremental on-line adaptation of pomdp-based dialogue managers to extended domains. In Proceedings on InterSpeech. Henderson, M., Thomson, B., and Williams, J. (2014a). The second dialog state tracking challenge. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 263. Henderson, M., Thomson, B., and Young, S. (2014b). Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292–299. Hixon, B., Clark, P., and Hajishirzi, H. (2015). Learning knowledge graphs for question answering through conversational dialog. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA.
1605.07683#44
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
45
Isbell, C. L., Kearns, M., Kormann, D., Singh, S., and Stone, P. (2000). Cobot in lambdamoo: A social statistics agent. In AAAI/IAAI, pages 36–41. Jafarpour, S., Burges, C. J., and Ritter, A. (2010). Filter, rank, and transfer the knowledge: Learning to chat. Advances in Ranking, 10. Kim, S., D’Haro, L. F., Banchs, R. E., Williams, J. D., and Henderson, M. (2016). The fourth dialog state tracking challenge. In Proceedings of the 7th International Workshop on Spoken Dialogue Systems (IWSDS). Lemon, O., Georgila, K., Henderson, J., and Stuttle, M. (2006). An isu dialogue system exhibiting reinforcement learning of dialogue policies: generic slot-filling in the talk in-car system. In Proceedings of the 11th Conference of the European Chapter of the ACL: Posters & Demonstrations, pages 119–122.
1605.07683#45
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
46
Liu, C.-W., Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L., and Pineau, J. (2016). How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023. Lowe, R., Pow, N., Serban, I., and Pineau, J. (2015). The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909. Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L., and Pineau, J. (2016). On the evaluation of dialogue systems with next utterance classification. arXiv preprint arXiv:1605.05414. Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781. Pietquin, O. and Hastie, H. (2013). A survey on metrics for the evaluation of user simulations. The knowledge engineering review, 28(01), 59–73.
1605.07683#46
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
47
Pietquin, O. and Hastie, H. (2013). A survey on metrics for the evaluation of user simulations. The knowledge engineering review, 28(01), 59–73. Ritter, A., Cherry, C., and Dolan, W. B. (2011). Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Serban, I. V., Sordoni, A., Bengio, Y., Courville, A., and Pineau, J. (2015a). Building end-to-end dialogue systems using generative hierarchical neural network models. In Proc. of the AAAI Conference on Artificial Intelligence. Serban, I. V., Lowe, R., Charlin, L., and Pineau, J. (2015b). A survey of available corpora for building data-driven dialogue systems. arXiv preprint arXiv:1512.05742. Shang, L., Lu, Z., and Li, H. (2015). Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364.
1605.07683#47
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
48
Sordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Nie, J.-Y., Gao, J., and Dolan, B. (2015). A neural network approach to context-sensitive generation of conversational responses. Proceedings of NAACL. Su, P.-H., Vandyke, D., Gasic, M., Kim, D., Mrksic, N., Wen, T.-H., and Young, S. (2015a). Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. arXiv preprint arXiv:1508.03386. Su, P.-H., Vandyke, D., Gasic, M., Mrksic, N., Wen, T.-H., and Young, S. (2015b). Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. arXiv preprint arXiv:1508.03391. Sukhbaatar, S., Szlam, A., Weston, J., and Fergus, R. (2015). End-to-end memory networks. Proceedings of NIPS. Vinyals, O. and Le, Q. (2015). A neural conversational model. arXiv preprint arXiv:1506.05869.
1605.07683#48
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
49
Vinyals, O. and Le, Q. (2015). A neural conversational model. arXiv preprint arXiv:1506.05869. Wang, H., Lu, Z., Li, H., and Chen, E. (2013). A dataset for research on short-text conversations. In EMNLP. Wang, Z. and Lemon, O. (2013). A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Proceedings of the SIGDIAL 2013 Conference. Wen, T.-H., Gasic, M., Mrksic, N., Su, P.-H., Vandyke, D., and Young, S. (2015). Semantically conditioned lstm-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745. Weston, J., Chopra, S., and Bordes, A. (2015a). Memory networks. Proceedings of ICLR. Weston, J., Bordes, A., Chopra, S., and Mikolov, T. (2015b). Towards ai-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.
1605.07683#49
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
50
Young, S., Gasic, M., Thomson, B., and Williams, J. D. (2013). Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5), 1160–1179.

# A MEMORY NETWORKS IMPLEMENTATION

Storing and representing the conversation history As the model conducts a conversation with the user, at each time step t the previous utterance (from the user) and response (from the model) are appended to the memory. Hence, at any given time there are user utterances and model responses $c^u_1, c^r_1, \ldots, c^u_{t-1}, c^r_{t-1}$ stored (i.e. the entire conversation). The aim at time t is thus to choose the next response $c^r_t$. We train on existing full dialog transcripts, so at training time we know the upcoming utterance $c^r_t$ and can use it as a training target. Following Dodge et al. (2016), we represent each utterance as a bag-of-words and in memory it is represented as a vector using the embedding matrix A, i.e. the memory is an array with entries:

$m = (A\Phi(c^u_1), A\Phi(c^r_1), \ldots, A\Phi(c^u_{t-1}), A\Phi(c^r_{t-1}))$
1605.07683#50
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
51
$m = (A\Phi(c^u_1), A\Phi(c^r_1), \ldots, A\Phi(c^u_{t-1}), A\Phi(c^r_{t-1}))$

where $\Phi(\cdot)$ maps the utterance to a bag-of-words of dimension V (the vocabulary size), and A is a d × V matrix, where d is the embedding dimension. We retain the last user utterance $c^u_t$ as the "input" to be used directly in the controller. The contents of each memory slot $m_i$ so far do not contain any information about which speaker spoke an utterance, or at what time during the conversation. We therefore encode both pieces of information in the mapping Φ by extending the vocabulary to contain T = 1000 extra "time features" which encode the index i into the bag-of-words, and two more features that encode whether the utterance was spoken by the user or the model.
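As a concrete illustration of the memory layout just described, here is a minimal NumPy sketch; the helper names, the exact time-feature encoding, and the assumption that the vocabulary already reserves entries such as "time_i" and "speaker_user" are ours for illustration, not the authors' released code.

```python
import numpy as np

def build_memory(history, vocab, A, T=1000):
    """Encode (speaker, utterance) pairs into memory vectors m_i = A @ phi(c_i).

    history: list of (speaker, utterance) tuples, speaker in {"user", "bot"}.
    vocab:   dict mapping word -> index; assumed to also contain the T time
             features ("time_0", ..., "time_{T-1}") and two speaker features.
    A:       d x V embedding matrix (V = len(vocab)).
    """
    V = len(vocab)
    memory = []
    for i, (speaker, utterance) in enumerate(history):
        phi = np.zeros(V)
        for w in utterance.split():
            if w in vocab:
                phi[vocab[w]] += 1.0                 # bag-of-words counts
        phi[vocab[f"time_{min(i, T - 1)}"]] = 1.0    # time feature for slot i
        phi[vocab[f"speaker_{speaker}"]] = 1.0       # who spoke this utterance
        memory.append(A @ phi)                       # d-dimensional memory vector
    return np.stack(memory) if memory else np.zeros((0, A.shape[0]))
```

With the last user utterance embedded the same way (q = A @ phi(c_t^u)), this array is what the controller attends over in the next step.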
1605.07683#51
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
52
Attention over the memory The last user utterance $c^u_t$ is embedded using the same matrix A, giving $q = A\Phi(c^u_t)$, which can also be seen as the initial state of the controller. At this point the controller reads from the memory to find salient parts of the previous conversation that are relevant to producing a response. The match between q and the memories is computed by taking the inner product followed by a softmax: $p_i = \mathrm{Softmax}(q^\top m_i)$, giving a probability vector over the memories. The vector that is returned back to the controller is then computed by $o = R \sum_i p_i m_i$, where R is a $d \times d$ square matrix. The controller state is then updated with $q_2 = o + q$. The memory can be iteratively reread to look for additional pertinent information using the updated state of the controller $q_2$ instead of q, and in general using $q_h$ on iteration h, with a fixed number of iterations N (termed N hops). Empirically we find improved performance on our tasks with up to 3 or 4 hops. Choosing the response The final prediction is then defined as: $\hat{a} = \mathrm{Softmax}(q_{N+1}^\top W\Phi(y_1), \ldots, q_{N+1}^\top W\Phi(y_C))$
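A short sketch of the hop mechanism in the same NumPy style (parameter names and shapes are illustrative assumptions; this is not the authors' implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memnn_hops(q, memory, R, n_hops=3):
    """Iteratively read the memory: p_i = softmax(q^T m_i), o = R @ sum_i p_i m_i, q <- o + q.

    q:      (d,) initial controller state, the embedded last user utterance.
    memory: (n, d) array of memory vectors m_i.
    R:      (d, d) matrix applied to the read vector.
    Returns the controller state after n_hops hops and the last attention vector.
    """
    p = None
    for _ in range(n_hops):
        p = softmax(memory @ q)                       # attention over the n memories
        o = R @ (p[:, None] * memory).sum(axis=0)     # weighted read, then transform
        q = o + q                                     # update the controller state
    return q, p
```

The per-hop attention vectors p are what the appendix tables report for each memory slot.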
1605.07683#52
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
53
$\hat{a} = \mathrm{Softmax}(q_{N+1}^\top W\Phi(y_1), \ldots, q_{N+1}^\top W\Phi(y_C))$

where there are C candidate responses in y, and W is of dimension $d \times V$. In our tasks the set y is a (large) set of candidate responses which includes all possible bot utterances and API calls. The entire model is trained using stochastic gradient descent (SGD), minimizing a standard cross-entropy loss between $\hat{a}$ and the true label a.

# B EXAMPLES OF PREDICTIONS OF A MEMORY NETWORK

Tables 3, 4, 5 and 6 display examples of predictions of the best performing Memory Network on full dialogs (Task 5, with 3 hops) on test examples of Tasks 1-4, along with the values of the attention over each memory for each hop ($p_i$ as defined in Sec. A). This model does not use match type features. Then, Table 7 displays an example of prediction of the best performing Memory Network on Concierge (with 2 hops) on a test example, along with the values of the attention over each memory for each hop.

# C HYPERPARAMETERS

Tables 8 and 9 respectively display the values of the hyperparameters of the best Supervised Embeddings and Memory Networks selected for each task. These models were selected using the best performance on the validation sets.

# D ADDITIONAL RESULTS
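Continuing the sketch above, the response selection and training loss can be written as follows (the candidate featurisation via the same bag-of-words map is an assumption for illustration):

```python
import numpy as np

def score_candidates(q_final, W, candidate_bows):
    """Score each candidate y_j as q_{N+1}^T (W @ phi(y_j)) and softmax-normalise."""
    logits = np.array([q_final @ (W @ phi_y) for phi_y in candidate_bows])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cross_entropy(probs, true_index):
    """Standard cross-entropy between the predicted distribution and the true label."""
    return -np.log(probs[true_index] + 1e-12)
```

In training, the gradient of this loss with respect to A, R and W is followed with SGD, exactly as stated in the text.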
1605.07683#53
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
54
# D ADDITIONAL RESULTS Table 10 provides results for additional variants of supervised embeddings, using either a dictionary that includes all bigrams to leverage some word order information, or match type features. On some tasks, supervised embeddings perform better when the last user utterance is used as sole input, without the full dialog history (see Table 8). When no history is used, we slightly adapt match type features to only record type: a special word corresponding to type T (e.g., phone, address, etc) is appended to the representation of a candidate if the 2API calls are stored as bot utterances cr i , and KB facts resulting from such calls as user utterances cu i . ? API calls are stored as bot utterances c’, and KB facts resulting from such calls as user utterances c'’. 11 Published as a conference paper at ICLR 2017 Table 3: Task 1 (Issue API call) The model learns to direct its attention towards the 4 memories containing the information key to issue the API call. More hops help to strengthen this signal. <silence> is a special token used to indicate that the user did not speak at this turn – the model has to carry out the conversation with no additional input.
1605.07683#54
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
55
Time | Locutor | Dialog history | Hop #1 | Hop #2 | Hop #3
1 | User | hi | .016 | .054 | .020
2 | Bot | hello what can i help you with today | .024 | .040 | .008
3 | User | may i have a table in paris | .137 | .099 | .141
4 | Bot | i’m on it | .028 | .048 | .004
5 | User | <silence> | .099 | .181 | .037
6 | Bot | any preference on a type of cuisine | .090 | .056 | .014
7 | User | i love indian food | .188 | .159 | .238
8 | Bot | how many people would be in your party | .022 | .051 | .010
9 | User | we will be six | .144 | .129 | .114
10 | Bot | which price range are looking for | .028 | .039 | .006
11 | User | in a moderate price range please | .213 | .128 | .405
12 | Bot | ok let me look into some options for you | .011 | .016 | .003
User input: <silence>
Correct answer: api_call indian paris six moderate
Predicted answer: api_call indian paris six moderate [Correct]

Table 4: Task 2 (Update API call) Out of the multiple memories from the current dialog, the model correctly focuses on the 2 important pieces: the original API call and the utterance giving the update.
1605.07683#55
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
56
Hop #1: .072 .012 .042 .023 .070 .006 .051 .043 .095 .042 .069 .113 .311 .007 .013 .006
Hop #2: .061 .026 .087 .026 .081 .025 .059 .038 .080 .025 .127 .062 .188 .016 .028 .011
Hop #3: .040 .001 .012 .001 .055 .001 .018 .004 .096 .003 .032 .043 .683 .001 .007 .000

Table 5: Task 3 (Displaying options) The model knows it has to display options but the attention is wrong: it should attend on the ratings to select the best option (with highest rating). It cannot learn that properly and match type features do not help. It is correct here by luck; the task is not solved overall (see Tab. 2). We do not show all memories in the table, only those with meaningful attention.
1605.07683#56
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
57
Time | Locutor | Dialog history
14 | Bot | api_call indian paris six moderate
15 | User | instead could it be with french food
20 | Bot | api_call french paris six moderate
21 | User | resto_1 r_phone rest_1_phone
23 | User | resto_1 r_cuisine french
24 | User | resto_1 r_location paris
25 | User | resto_1 r_number six
26 | User | resto_1 r_price moderate
27 | User | resto_1 r_rating 6
30 | User | resto_2 r_cuisine french
31 | User | resto_2 r_location paris
32 | User | resto_2 r_number six
33 | User | resto_2 r_price moderate
37 | User | resto_3 r_cuisine french
38 | User | resto_3 r_location paris
39 | User | resto_3 r_number six
40 | User | resto_3 r_price moderate
User input: <silence>
Correct answer: what do you think of this option: resto_1
Predicted answer: what do you think of this option: resto_1
Hop #1: .000 .103 .000 .004 .005 .292 .298 .090 .002 .007 .081 .012 .009 .001 .016 .022 .015
Hop #2 (partial): .012 .067 .012 .018 .029 .060 .050 .060 .016 .031 .040 .020 .029 .014
1605.07683#57
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
59
Table 6: Task 4 (Providing extra-information) The model knows it must display a phone or an address, but, as explained in Section A, the embeddings mix up the information and make it hard to distinguish between different phone numbers or addresses, making it very hard to answer correctly. As shown in the results of Tab. 2, this problem can be solved by adding match type features, which allow to emphasize entities actually appearing in the history. The attention is globally wrong here.
1605.07683#59
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
60
Time | Locutor | Dialog history
14 | Bot | api_call indian paris six moderate
15 | User | instead could it be with french food
20 | Bot | api_call french paris six moderate
21 | User | resto_1 r_phone resto_1_phone
22 | User | resto_1 r_address resto_1_address
23 | User | resto_1 r_cuisine french
24 | User | resto_1 r_location paris
25 | User | resto_1 r_number six
26 | User | resto_1 r_price moderate
27 | User | resto_1 r_rating 6
28 | User | resto_2 r_phone resto_2_phone
29 | User | resto_2 r_address resto_2_address
31 | User | resto_2 r_location paris
32 | User | resto_2 r_number six
33 | User | resto_2 r_price moderate
35 | User | resto_3 r_phone resto_3_phone
36 | User | resto_3 r_address resto_3_address
37 | User | resto_3 r_location paris
39 | User | resto_3 r_number six
40 | User | resto_3 r_price moderate
42 | Bot | what do you think of this option: resto_1
43 | User | let’s do it
44 | Bot | great let me do the reservation
User input: do you have its address
Correct answer: here it is resto_1_address
Predicted answer: here it is: resto_8_address
1605.07683#60
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
61
think of this option: resto_1 let’s do it great let me do the reservation do you have its address here it is resto_1_address here it is: resto_8_address Hop #1 Hop #2 Hop #3 .000 .011 .000 .005 .004 .003 .091 .078 .225 .006 .009 .004 .176 .126 .090 .001 .002 .028 .013 .008 .001 .004 .000 .006 .024 .005 .011 .018 .018 .068 .086 .070 .014 .015 .014 .075 .100 .038 .004 .005 .028 .039 .018 .074 .032 .003 .000 .007 .001 .004 .001 .001 .108 .020 .369 .008 .006 .001 .193 .026 .167 .001 .001 .026 .002 .013 .000 .001 .000 [Incorrect]
1605.07683#61
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
62
Table 7: Concierge Data. The model is also able to learn from human-human dialogs. <person>, <org>, <number> and <date> are special tokens used to anonymize the data. We report the top 5 answers predicted by the model. They are all semantically equivalent. Note that the utterances, while all produced by humans, are not perfect English ("rservation", "I’ll check into it").
1605.07683#62
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
63
Time 1 2 3 4 5 Locutor User User User User Bot User input Correct answer Pred. answer #1 Pred. answer #2 Pred. answer #3 Pred. answer #4 Pred. answer #5 Dialog History hey concierge could you check if i can get a rservation at <org> <date> for brunch <number> people <silence> hi <person> unfortunately <org> is fully booked for <date> and there’s <number> people on the waiting list when’s the earliest availability i’ll check i’m on it i’ll find out i’ll take a look i’ll check i’ll check into it Hop #1 Hop #2 .189 .209 .197 .187 .225 .095 .178 .142 .167 .410 [Incorrect] [Incorrect] [Incorrect] [Correct] [Incorrect] Table 8: Hyperparameters of Supervised Embeddings. When Use History is True, the whole conversation history is concatenated with the latest user utterance to create the input. If False, only the latest utterance is used as input.
1605.07683#63
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
64
Task      | Learning Rate | Margin m | Embedding Dim d | Negative Cand. N | Use History
Task 1    | 0.01          | 0.01     | 32              | 100              | True
Task 2    | 0.01          | 0.01     | 128             | 100              | False
Task 3    | 0.01          | 0.1      | 128             | 1000             | False
Task 4    | 0.001         | 0.1      | 128             | 1000             | False
Task 5    | 0.01          | 0.01     | 32              | 100              | True
Task 6    | 0.001         | 0.01     | 128             | 100              | False
Concierge | 0.001         | 0.1      | 64              | 100              | False
Table 9: Hyperparameters of Memory Networks. The longer and more complex the dialogs are, the more hops are needed.
Task      | Learning Rate | Margin m | Embedding Dim d | Negative Cand. N | Nb Hops
Task 1    | 0.01          | 0.1      | 128             | 100              | 1
Task 2    | 0.01          | 0.1      | 32              | 100              | 1
Task 3    | 0.01          | 0.1      | 32              | 100              | 3
Task 4    | 0.01          | 0.1      | 128             | 100              | 2
Task 5    | 0.01          | 0.1      | 32              | 100              | 3
Task 6    | 0.01          | 0.1      | 128             | 100              | 4
Concierge | 0.001         | 0.1      | 128             | 100              | 2
1605.07683#64
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
65
candidate contains a word that appears in the knowledge base as an entity of type T, regardless of whether the same word appeared earlier in the conversation. As seen in Table 10, match type features improve performance on out-of-vocabulary tasks 1 and 5, bringing it closer to that of Memory Networks without match type features, but still lagging well behind Memory Networks with match type features. Bigrams slightly hurt rather than help performance, except in Task 5 in the standard in-vocabulary setup (performance is lower in the OOV setup). Table 10: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog State Tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parentheses.
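As an illustration, here is a minimal sketch of how a "match type" feature of the kind described here could be computed. The function, the knowledge-base format, and the entity names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: flag, per KB entity type, whether a candidate response
# contains a word known to be an entity of that type.
KB_ENTITY_TYPES = {
    "resto_1_phone": "R_phone",
    "resto_1_address": "R_address",
    "paris": "R_location",
    "french": "R_cuisine",
}

def match_type_features(candidate: str) -> set[str]:
    """Return the set of entity types whose entities appear in the candidate,
    regardless of whether those words occurred earlier in the conversation."""
    types = set()
    for word in candidate.lower().split():
        if word in KB_ENTITY_TYPES:
            types.add(KB_ENTITY_TYPES[word])
    return types

print(match_type_features("here it is resto_1_address"))  # {'R_address'}
```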
1605.07683#65
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.07683
66
Task T1: Issuing API calls T2: Updating API calls T3: Displaying options T4: Providing information T5: Full dialogs T1(OOV): Issuing API calls T2(OOV): Updating API calls T3(OOV): Displaying options T4(OOV): Providing inform. T5(OOV): Full dialogs T6: Dialog state tracking 2 Supervised Embeddings + match type no bigram no match type no bigram (100) (0) (0) (0) (0) (0) (0) (0) (0) (0) (0) + bigrams no match type 98.6 (92.4) 68.3 64.9 57.3 83.4 58.8 68.3 62.1 57.0 50.4 21.8 100 68.4 64.9 57.2 75.4 60.0 68.3 65.0 57.0 58.2 22.6 83.2 68.4 64.9 57.2 76.2 67.2 68.3 65.0 57.1 64.4 22.1 (0) (0) (0) (0) (0) (0) (0) (0) (0) (0) (0) (0) (0) (0) (0) (0)
1605.07683#66
Learning End-to-End Goal-Oriented Dialog
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
http://arxiv.org/pdf/1605.07683
Antoine Bordes, Y-Lan Boureau, Jason Weston
cs.CL
Accepted as a conference paper at ICLR 2017
null
cs.CL
20160524
20170330
[ { "id": "1512.05742" }, { "id": "1508.03386" }, { "id": "1605.05414" }, { "id": "1508.03391" }, { "id": "1508.01745" }, { "id": "1502.05698" }, { "id": "1503.02364" }, { "id": "1506.08909" }, { "id": "1603.08023" }, { "id": "1506.05869" } ]
1605.06431
0
# Residual Networks Behave Like Ensembles of Relatively Shallow Networks Andreas Veit, Michael Wilber, Serge Belongie Department of Computer Science & Cornell Tech, Cornell University {av443, mjw285, sjb344}@cornell.edu # Abstract In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks. # Introduction
1605.06431#0
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
1
# Introduction Most modern computer vision systems follow a familiar architecture, processing inputs from low-level features up to task-specific high-level features. Recently proposed residual networks [5, 6] challenge this conventional view in three ways. First, they introduce identity skip-connections that bypass residual layers, allowing data to flow from any layers directly to any subsequent layers. This is in stark contrast to the traditional strictly sequential pipeline. Second, skip connections give rise to networks that are two orders of magnitude deeper than previous models, with as many as 1202 layers. This is contrary to architectures like AlexNet [13] and even biological systems [17] that can capture complex concepts within half a dozen layers.1 Third, in initial experiments, we observe that removing single layers from residual networks at test time does not noticeably affect their performance. This is surprising because removing a layer from a traditional architecture such as VGG [18] leads to a dramatic loss in performance.
1605.06431#1
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
2
In this work we investigate the impact of these differences. To address the influence of identity skip-connections, we introduce the unraveled view. This novel representation shows residual networks can be viewed as a collection of many paths instead of a single deep network. Further, the perceived resilience of residual networks raises the question whether the paths are dependent on each other or whether they exhibit a degree of redundancy. To find out, we perform a lesion study. The results show ensemble-like behavior in the sense that removing paths from residual networks by deleting layers or corrupting paths by reordering layers only has a modest and smooth impact on performance. Finally, we investigate the depth of residual networks. Unlike traditional models, paths through residual networks vary in length. The distribution of path lengths follows a binomial distribution, meaning that the majority of paths in a network with 110 layers are only about 55 layers deep. (Footnote 1: Making the common assumption that a layer in a neural network corresponds to a cortical area.) Moreover, we show most gradient during training comes from paths that are even shorter, i.e., 10-34 layers deep.
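To make the binomial path-length claim concrete, here is a minimal sketch. It assumes the 110-layer network's 54 residual blocks of two convolutional layers each, as described later in the experimental setup; the variable names are illustrative.

```python
from math import comb

n_blocks = 54          # residual blocks in the 110-layer CIFAR-10 network
layers_per_block = 2   # each block applies two convolutional layers

# Every path either enters or skips each block independently, so the number of
# blocks on a uniformly chosen path follows Binomial(n_blocks, 0.5).
total_paths = 2 ** n_blocks
mean_blocks = sum(k * comb(n_blocks, k) for k in range(n_blocks + 1)) / total_paths

print(f"total paths: {total_paths:e}")                                     # about 1.8e16
print(f"expected blocks on a path: {mean_blocks}")                         # 27.0
print(f"expected depth in conv layers: {mean_blocks * layers_per_block}")  # 54.0, roughly half of 110
```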
1605.06431#2
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
3
This reveals a tension. On the one hand, residual network performance improves with adding more and more layers [6]. However, on the other hand, residual networks can be seen as collections of many paths and the only effective paths are relatively shallow. Our results could provide a first explanation: residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network. Rather, they enable very deep networks by shortening the effective paths. For now, short paths still seem necessary to train very deep networks. In this paper we make the following contributions: • We introduce the unraveled view, which illustrates that residual networks can be viewed as a collection of many paths, instead of a single ultra-deep network. • We perform a lesion study to show that these paths do not strongly depend on each other, even though they are trained jointly. Moreover, they exhibit ensemble-like behavior in the sense that their performance smoothly correlates with the number of valid paths. • We investigate the gradient flow through residual networks, revealing that only the short paths contribute gradient during training. Deep paths are not required during training. # 2 Related Work
1605.06431#3
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
4
• We investigate the gradient flow through residual networks, revealing that only the short paths contribute gradient during training. Deep paths are not required during training. # 2 Related Work The sequential and hierarchical computer vision pipeline Visual processing has long been understood to follow a hierarchical process from the analysis of simple to complex features. This formalism is based on the discovery of the receptive field [10], which characterizes the visual system as a hierarchical and feedforward system. Neurons in early visual areas have small receptive fields and are sensitive to basic visual features, e.g., edges and bars. Neurons in deeper layers of the hierarchy capture basic shapes, and even deeper neurons respond to full objects. This organization has been widely adopted in the computer vision and machine learning literature, from early neural networks such as the Neocognitron [4] and the traditional hand-crafted feature pipeline of Malik and Perona [15] to convolutional neural networks [13, 14]. The recent strong results of very deep neural networks [18, 20] led to the general perception that it is the depth of neural networks that governs their expressive power and performance. In this work, we show that residual networks do not necessarily follow this tradition.
1605.06431#4
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
5
Residual networks [5, 6] are neural networks in which each layer consists of a residual module fi and a skip connection2 bypassing fi. Since layers in residual networks can comprise multiple convolutional layers, we refer to them as residual blocks in the remainder of this paper. For clarity of notation, we omit the initial pre-processing and final classification steps. With yi−1 as its input, the output of the ith block is recursively defined as yi ≡ fi(yi−1) + yi−1, (1) where fi(x) is some sequence of convolutions, batch normalization [11], and Rectified Linear Units (ReLU) as nonlinearities. Figure 1 (a) shows a schematic view of this architecture. In the most recent formulation of residual networks [6], fi(x) is defined by fi(x) = W'i · σ(B(Wi · σ(B(x)))), (2) where Wi and W'i are weight matrices, · denotes convolution, B(x) is batch normalization and σ(x) = max(x, 0). Other formulations are typically composed of the same operations, but may differ in their order.
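As a concrete illustration of Equations (1) and (2), here is a minimal PyTorch-style sketch of a pre-activation residual block. The class and argument names are illustrative, not the authors' code, and the identity skip connection assumes matching input and output shapes (no downsampling).

```python
import torch
import torch.nn as nn

class PreActResidualBlock(nn.Module):
    """One block of Equation (1): y_i = y_{i-1} + f_i(y_{i-1}),
    with f_i as in Equation (2): conv(relu(bn(conv(relu(bn(x))))))."""

    def __init__(self, channels: int):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.conv1(torch.relu(self.bn1(x)))          # W_i * sigma(B(x))
        residual = self.conv2(torch.relu(self.bn2(residual)))   # W'_i * sigma(B(...))
        return x + residual                                     # identity skip connection
```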
1605.06431#5
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
6
The idea of branching paths in neural networks is not new. For example, in the regime of convolutional neural networks, models based on inception modules [20] were among the first to arrange layers in blocks with parallel paths rather than a strict sequential order. We choose residual networks for this study because of their simple design principle. Highway networks Residual networks can be viewed as a special case of highway networks [19]. The output of each layer of a highway network is defined as yi+1 ≡ fi+1(yi) · ti+1(yi) + yi · (1 − ti+1(yi)) (3) (Footnote 2: We only consider identity skip connections, but this framework readily generalizes to more complex projection skip connections when downsampling is required.) Figure 1: (a) Conventional 3-block residual network; (b) unraveled view of (a). Residual networks are conventionally shown as (a), which is a natural representation of Equation (1). When we expand this formulation to Equation (6), we obtain an unraveled view of a 3-block residual network (b). Circular nodes represent additions. From this view, it is apparent that residual networks have O(2^n) implicit paths connecting input and output and that adding a block doubles the number of paths.
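For comparison with the residual block above, here is a minimal sketch of the highway-layer update in Equation (3), using a toy fully-connected transform; the names and layer choices are illustrative assumptions, not the highway-network authors' architecture.

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    """Equation (3): y = f(x) * t(x) + x * (1 - t(x)).
    With t(x) fixed at 0.5 the layer reduces to 0.5 * (f(x) + x),
    i.e. a (rescaled) residual block."""

    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())  # f
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())    # t: learned, input-dependent

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = self.gate(x)
        return self.transform(x) * t + x * (1.0 - t)
```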
1605.06431#6
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
7
This follows the same structure as Equation (1). Highway networks also contain residual modules and skip connections that bypass them. However, the output of each path is attenuated by a gating function t, which has learned parameters and is dependent on its input. Highway networks are equivalent to residual networks when ti(·) = 0.5, in which case data flows equally through both paths. Given an omnipotent solver, highway networks could learn whether each residual module should affect the data. This introduces more parameters and more complexity. Investigating neural networks Several investigative studies seek to better understand convolutional neural networks. For example, Zeiler and Fergus [23] visualize convolutional filters to unveil the concepts learned by individual neurons. Further, Szegedy et al. [21] investigate the function learned by neural networks and how small changes in the input called adversarial examples can lead to large changes in the output. Within this stream of research, the closest study to our work is from Yosinski et al. [22], which performs lesion studies on AlexNet. They discover that early layers exhibit little co-adaptation and later layers have more co-adaptation. These papers, along with ours, have the common thread of exploring specific aspects of neural network performance. In our study, we focus our investigation on structural properties of neural networks.
1605.06431#7
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
8
Ensembling Since the early days of neural networks, researchers have used simple ensembling techniques to improve performance. Though boosting has been used in the past [16], one simple approach is to arrange a committee [3] of neural networks in a simple voting scheme, where the final output predictions are averaged. Top performers in several competitions use this technique almost as an afterthought [6, 13, 18]. Generally, one key characteristic of ensembles is their smooth performance with respect to the number of members. In particular, the performance increase from additional ensemble members gets smaller with increasing ensemble size. Even though they are not strict ensembles, we show that residual networks behave similarly. Dropout Hinton et al. [7] show that dropping out individual neurons during training leads to a network that is equivalent to averaging over an ensemble of exponentially many networks. Similar in spirit, stochastic depth [9] trains an ensemble of networks by dropping out entire layers during training. In this work, we show that one does not need a special training strategy such as stochastic depth to drop out layers. Entire layers can be removed from plain residual networks without impacting performance, indicating that they do not strongly depend on each other. # 3 The unraveled view of residual networks
1605.06431#8
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
9
# 3 The unraveled view of residual networks To better understand residual networks, we introduce a formulation that makes it easier to reason about their recursive nature. Consider a residual network with three building blocks from input y0 to output y3. Equation (1) gives a recursive definition of residual networks. The output of each stage is based on the combination of two subterms. We can make the shared structure of the residual network apparent by unrolling the recursion into an exponential number of nested terms, expanding one layer at each substitution step: y3 = y2 + f3(y2) (4) = [y1 + f2(y1)] + f3(y1 + f2(y1)) (5) = [y0 + f1(y0) + f2(y0 + f1(y0))] + f3(y0 + f1(y0) + f2(y0 + f1(y0))) (6) Figure 2: (a) Deleting f2 from the unraveled view; (b) ordinary feedforward network. Deleting a layer in residual networks at test time (a) is equivalent to zeroing half of the paths. In ordinary feed-forward networks (b) such as VGG or AlexNet, deleting individual layers alters the only viable path from input to output.
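A small numerical check of the unrolled form, using toy residual modules (the functions and dimensions are arbitrary choices for illustration): evaluating Equation (6) literally gives the same output as the recursive definition in Equation (1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy residual modules f_i(x) = relu(W_i @ x), just to have concrete nonlinear functions.
W = [rng.standard_normal((4, 4)) * 0.1 for _ in range(3)]
f = [lambda x, W=Wi: np.maximum(W @ x, 0.0) for Wi in W]

y0 = rng.standard_normal(4)

# Recursive form, Equation (1): y_i = y_{i-1} + f_i(y_{i-1}).
y = y0
for fi in f:
    y = y + fi(y)
y3_recursive = y

# Fully unrolled form, Equation (6).
y3_unrolled = (y0 + f[0](y0) + f[1](y0 + f[0](y0))) \
    + f[2](y0 + f[0](y0) + f[1](y0 + f[0](y0)))

assert np.allclose(y3_recursive, y3_unrolled)  # both views compute the same output
```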
1605.06431#9
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
10
y3 = [y0 + f1(y0) + f2(y0 + f1(y0))] + f3(y0 + f1(y0) + f2(y0 + f1(y0))) (6) We illustrate this expression tree graphically in Figure 1 (b). With subscripts in the function modules indicating weight sharing, this graph is equivalent to the original formulation of residual networks. The graph makes clear that data flows along many paths from input to output. Each path is a unique configuration of which residual module to enter and which to skip. Conceivably, each unique path through the network can be indexed by a binary code b ∈ {0, 1}^n where bi = 1 iff the input flows through residual module fi and 0 if fi is skipped. It follows that residual networks have 2^n paths connecting input to output layers. In the classical visual hierarchy, each layer of processing depends only on the output of the previous layer. Residual networks cannot strictly follow this pattern because of their inherent structure. Each module fi(·) in the residual network is fed data from a mixture of 2^(i−1) different distributions generated from every possible configuration of the previous i − 1 residual modules.
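A minimal sketch of the path-indexing argument (variable names are illustrative): enumerating the binary codes directly shows the 2^n paths and how many of them pass through each number of modules.

```python
from itertools import product
from collections import Counter

n = 3  # number of residual blocks, as in the unraveled view of Figure 1 (b)

# Each path is a binary code b in {0, 1}^n: b_i = 1 means the path enters f_i,
# b_i = 0 means it takes the skip connection around f_i.
paths = list(product([0, 1], repeat=n))
length_histogram = Counter(sum(b) for b in paths)

print(len(paths))                        # 2^3 = 8 paths
print(sorted(length_histogram.items()))  # [(0, 1), (1, 3), (2, 3), (3, 1)] -- binomial counts
```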
1605.06431#10
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
11
Compare this to a strictly sequential network such as VGG or AlexNet, depicted conceptually in Figure 2 (b). In these networks, input always flows from the first layer straight through to the last in a single path. Written out, the output of a three-layer feed-forward network is y^FF_3 = f^FF_3(f^FF_2(f^FF_1(y0))) (7) where each f^FF_i(x) is typically a convolution followed by batch normalization and ReLU. In these networks, f^FF_i is only fed data from a single path configuration, the output of f^FF_(i−1). It is worthwhile to note that ordinary feed-forward neural networks can also be “unraveled” using the above thought process at the level of individual neurons rather than layers. This renders the network as a collection of different paths, where each path is a unique configuration of neurons from each layer connecting input to output. Thus, all paths through ordinary neural networks are of the same length. However, paths in residual networks have varying length. Further, each path in a residual network goes through a different subset of layers.
1605.06431#11
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
12
Based on these observations, we formulate the following questions and address them in our experiments below. Are the paths in residual networks dependent on each other or do they exhibit a degree of redundancy? If the paths do not strongly depend on each other, do they behave like an ensemble? Do paths of varying lengths impact the network differently? # 4 Lesion study In this section, we use three lesion studies to show that paths in residual networks do not strongly depend on each other and that they behave like an ensemble. All experiments are performed at test [Plot residue from Figures 3 and 4: test classification error vs. dropped layer index for a 110-layer residual network (v2) and a 15-layer VGG network on CIFAR-10, with each network's baseline; and top-1 error vs. dropped layer index for a 200-layer residual network (v2) on ImageNet.] Figure 4: Results when dropping individual blocks from residual networks trained on ImageNet are similar to CIFAR results. However, downsampling layers tend to have more impact on ImageNet.
1605.06431#12
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
13
Figure 3: Deleting individual layers from VGG and a residual network on CIFAR-10. VGG performance drops to random chance when any one of its layers is deleted, but deleting individual modules from residual networks has a minimal impact on performance. Removing downsampling modules has a slightly higher impact. time on CIFAR-10 [12]. Experiments on ImageNet [2] show comparable results. We train residual networks with the standard training strategy, dataset augmentation, and learning rate policy [6]. For our CIFAR-10 experiments, we train a 110-layer (54-module) residual network with modules of the “pre-activation” type, which contain batch normalization as the first step. For ImageNet we use 200 layers (66 modules). It is important to note that we did not use any special training strategy to adapt the network. In particular, we did not use any perturbations such as stochastic depth during training. # 4.1 Experiment: Deleting individual layers from neural networks at test time
1605.06431#13
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
14
# 4.1 Experiment: Deleting individual layers from neural networks at test time As a motivating experiment, we will show that not all transformations within a residual network are necessary by deleting individual modules from the neural network after it has been fully trained. To do so, we remove the residual module from a single building block, leaving the skip connection (or downsampling projection, if any) untouched. That is, we change yi = yi−1 + fi(yi−1) to y'i = yi−1. We can measure the importance of each building block by varying which residual module we remove. To compare to conventional convolutional neural networks, we train a VGG network with 15 layers, setting the number of channels to 128 for all layers to allow the removal of any layer. It is unclear whether any neural network can withstand such a drastic change to the model structure. We expect them to break because dropping any layer drastically changes the input distribution of all subsequent layers.
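A minimal sketch of this lesion, assuming the blocks are stored in a PyTorch nn.Sequential with identity skip connections (the function name is illustrative, not the authors' code):

```python
import torch.nn as nn

def lesion_block(blocks: nn.Sequential, i: int) -> None:
    """Delete block i at test time: y_i = y_{i-1} + f_i(y_{i-1}) becomes y'_i = y_{i-1}.
    The residual branch f_i is removed while the skip path is kept."""
    blocks[i] = nn.Identity()  # the block now just passes its input through
```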
1605.06431#14
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
15
It is unclear whether any neural network can withstand such a drastic change to the model structure. We expect them to break because dropping any layer drastically changes the input distribution of all subsequent layers. The results are shown in Figure 3. As expected, deleting any layer in VGG reduces performance to chance levels. Surprisingly, this is not the case for residual networks. Removing downsampling blocks does have a modest impact on performance (peaks in Figure 3 correspond to downsampling building blocks), but no other block removal leads to a noticeable change. This result shows that, to some extent, the structure of a residual network can be changed at runtime without affecting performance. Experiments on ImageNet show comparable results, as seen in Figure 4. Why are residual networks resilient to dropping layers but VGG is not? Expressing residual networks in the unraveled view provides a first insight. It shows that residual networks can be seen as a collection of many paths. As illustrated in Figure 2 (a), when a layer is removed, the number of paths is reduced from 2^n to 2^(n−1), leaving half the number of paths valid. VGG only contains a single usable path from input to output. Thus, when a single layer is removed, the only viable path is corrupted. This result suggests that paths in a residual network do not strongly depend on each other although they are trained jointly. # 4.2 Experiment: Deleting many modules from residual networks at test-time
1605.06431#15
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
16
# 4.2 Experiment: Deleting many modules from residual networks at test-time Having shown that paths do not strongly depend on each other, we investigate whether the collection of paths shows ensemble-like behavior. One key characteristic of ensembles is that their performance [Figure 5 plots: (a) test error vs. number of layers deleted (1 to 20); (b) test error vs. Kendall Tau correlation (from 1.0 down to 0.84) of the shuffled layer ordering.] Figure 5: (a) Error increases smoothly when randomly deleting several modules from a residual network. (b) Error also increases smoothly when re-ordering a residual network by shuffling building blocks. The degree of reordering is measured by the Kendall Tau correlation coefficient. These results are similar to what one would expect from ensembles.
1605.06431#16
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
17
depends smoothly on the number of members. If the collection of paths were to behave like an ensemble, we would expect test-time performance of residual networks to smoothly correlate with the number of valid paths. This is indeed what we observe: deleting increasing numbers of residual modules increases error smoothly (Figure 5 (a)). This implies residual networks behave like ensembles. When deleting k residual modules from a network originally of length n, the number of valid paths decreases to O(2^(n−k)). For example, the original network started with 54 building blocks, so deleting 10 blocks leaves 2^44 paths. Though the collection is now a factor of roughly 10^−3 of its original size, there are still many valid paths and error remains around 0.2. # 4.3 Experiment: Reordering modules in residual networks at test-time Our previous experiments were only about dropping layers, which has the effect of removing paths from the network. In this experiment, we consider changing the structure of the network by re-ordering the building blocks. This has the effect of removing some paths and inserting new paths that have never been seen by the network during training. In particular, it moves high-level transformations before low-level transformations.
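A sketch of the reordering experiment, assuming `blocks` holds only the dimension-compatible (non-downsampling) blocks of one stage; scipy's kendalltau measures how far the shuffled order departs from the original. The helper and evaluation call are hypothetical.

```python
# Sketch: swap k random pairs of dimension-compatible residual blocks and
# report the Kendall Tau correlation between the original and new ordering.
import random
from scipy.stats import kendalltau

def swap_random_pairs(blocks, k):
    order = list(range(len(blocks)))
    for _ in range(k):
        i, j = random.sample(range(len(blocks)), 2)
        blocks[i], blocks[j] = blocks[j], blocks[i]
        order[i], order[j] = order[j], order[i]
    tau, _ = kendalltau(range(len(blocks)), order)
    return blocks, tau  # tau = 1.0 means the ordering is unchanged

# blocks, tau = swap_random_pairs(blocks, k=5)   # then evaluate the reordered net
```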
1605.06431#17
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
18
To re-order the network, we swap k randomly sampled pairs of building blocks with compatible dimensionality, ignoring modules that perform downsampling. We graph error with respect to the Kendall Tau rank correlation coefficient, which measures the amount of corruption. The results are shown in Figure 5 (b). As corruption increases, the error smoothly increases as well. This result is surprising because it suggests that residual networks can be reconfigured to some extent at runtime. # 5 The importance of short paths in residual networks Now that we have seen that there are many paths through residual networks and that they do not necessarily depend on each other, we investigate their characteristics. Distribution of path lengths Not all paths through residual networks are of the same length. For example, there is precisely one path that goes through all modules and n paths that go only through a single module. From this reasoning, the distribution of all possible path lengths through a residual network follows a Binomial distribution. Thus, we know that the path lengths are closely centered around the mean of n/2. Figure 6 (a) shows the path length distribution for a residual network with 54 modules; more than 95% of paths go through 19 to 35 modules.
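The Binomial claim can be checked directly; a small scipy example follows, using the numbers of the 54-module network discussed here.

```python
# Worked example: each of the 54 modules is either entered or skipped, so path
# lengths follow Binomial(n=54, p=0.5). The mass between 19 and 35 modules
# indeed covers more than 95% of all paths.
from scipy.stats import binom

n = 54
dist = binom(n, 0.5)
print(dist.mean())                  # 27.0, i.e. n/2
print(dist.cdf(35) - dist.cdf(18))  # ~0.98, the share of paths of length 19..35
```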
1605.06431#18
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
19
Vanishing gradients in residual networks Generally, data flows along all paths in residual networks. However, not all paths carry the same amount of gradient. In particular, the length of the paths through the network affects the gradient magnitude during backpropagation [1, 8]. To empirically investigate the effect of vanishing gradients on residual networks, we perform the following experiment. Starting from a trained network with 54 blocks, we sample individual paths of a certain length and measure the norm of the gradient that arrives at the input. To sample a path of length k, we first feed a batch forward through the whole network. During the backward pass, we randomly sample k residual
1605.06431#19
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
20
Figure 6: How much gradient do the paths of different lengths contribute in a residual network? To find out, we first show the distribution of all possible path lengths (a). This follows a Binomial distribution. Second, we record how much gradient is induced on the first layer of the network through paths of varying length (b), which appears to decay roughly exponentially with the number of modules the gradient passes through. Finally, we can multiply these two functions (c) to show how much gradient comes from all paths of a certain length. Though there are many paths of medium length, paths longer than ∼20 modules are generally too long to contribute noticeable gradient during training. This suggests that the effective paths in residual networks are relatively shallow. blocks. For those k blocks, we only propagate through the residual module; for the remaining n − k blocks, we only propagate through the skip connection. Thus, we only measure gradients that flow through the single path of length k. We sample 1,000 measurements for each length k using random batches from the training set. The results show that the gradient magnitude of a path decreases exponentially with the number of modules it went through in the backward pass (Figure 6 (b)).
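The path-sampling trick above can be sketched with detach-based routing: the forward value of each block stays x + f(x), but detaching one of the two summands forces the backward pass through either the residual branch or the skip connection. Everything here (the `blocks` list, the head, the data) is a hypothetical stand-in, not the authors' code.

```python
# Sketch: route the backward pass through a single path of length k while
# leaving the forward computation of y = x + f(x) unchanged.
import random
import torch

def forward_single_path(blocks, x, k):
    on_path = set(random.sample(range(len(blocks)), k))
    for i, f in enumerate(blocks):
        if i in on_path:
            x = x.detach() + f(x)   # gradient flows only through the residual module
        else:
            x = x + f(x).detach()   # gradient flows only through the skip connection
    return x

# Hypothetical measurement loop:
# x = images.clone().requires_grad_(True)
# loss = criterion(head(forward_single_path(blocks, x, k)), labels)
# loss.backward()
# grad_norms.append(x.grad.norm().item())   # gradient arriving at the input
```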
1605.06431#20
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
21
The effective paths in residual networks are relatively shallow Finally, we can use these results to deduce whether shorter or longer paths contribute most of the gradient during training. To find the total gradient magnitude contributed by paths of each length, we multiply the frequency of each path length with the expected gradient magnitude. The result is shown in Figure 6 (c). Surprisingly, almost all of the gradient updates during training come from paths between 5 and 17 modules long. These are the effective paths, even though they constitute only 0.45% of all paths through this network. Moreover, in comparison to the total length of the network, the effective paths are relatively shallow.
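To reproduce the shape of Figure 6 (c), one only has to multiply the path-count distribution by the measured per-path gradient norms; the sketch below uses a placeholder exponential decay in place of the real measurements.

```python
# Sketch: total gradient contribution per path length = (number of paths of
# that length) x (mean gradient norm of a single such path). The decay curve
# here is a placeholder for the measured values, not the paper's data.
import numpy as np
from scipy.stats import binom

n = 54
lengths = np.arange(n + 1)
path_counts = binom.pmf(lengths, n, 0.5) * 2.0 ** n   # paths per length
mean_grad_norm = np.exp(-0.8 * lengths)               # placeholder decay
total = path_counts * mean_grad_norm
print(int(lengths[np.argmax(total)]))  # the peak sits well below n/2 = 27
```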
1605.06431#21
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
22
To validate this result, we retrain a residual network from scratch that only sees the effective paths during training. This ensures that no long path is ever used. If the retrained model is able to perform competitively compared to training the full network, we know that long paths in residual networks are not needed during training. We achieve this by only training a subset of the modules during each mini batch. In particular, we choose the number of modules such that the distribution of paths during training aligns with the distribution of the effective paths in the whole network. For the network with 54 modules, this means we sample exactly 23 modules during each training batch. Then, the path lengths during training are centered around 11.5 modules, well aligned with the effective paths. In our experiment, the network trained only with the effective paths achieves a 5.96% error rate, whereas the full model achieves a 6.10% error rate. There is no statistically significant difference. This demonstrates that indeed only the effective paths are needed. # 6 Discussion Removing residual modules mostly removes long paths Deleting a module from a residual network mainly removes the long paths through the network. In particular, when deleting d residual modules from a network of length n, the fraction of paths remaining per path length x is given by: fraction of remaining paths of length x = $\binom{n-d}{x} / \binom{n}{x}$
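Returning to the effective-path retraining described above, a minimal sketch is to let a fixed-size random subset of blocks (23 of 54 here) participate in each mini-batch, so sampled path lengths center around 11.5 modules. The `blocks` list and training loop are hypothetical.

```python
# Sketch: train with only the effective paths by letting 23 of the 54 residual
# blocks participate in each mini-batch; the rest act as identities for that
# batch, so no long path is ever used during training.
import random

def forward_effective_paths(blocks, x, active=23):
    chosen = set(random.sample(range(len(blocks)), active))
    for i, f in enumerate(blocks):
        if i in chosen:
            x = x + f(x)   # block participates in this mini-batch
        # else: only the skip connection is used, x passes through unchanged
    return x

# for images, labels in train_loader:                       # hypothetical loop
#     loss = criterion(head(forward_effective_paths(blocks, stem(images))), labels)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```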
1605.06431#22
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
23
fraction of remaining paths of length x = $\binom{n-d}{x} / \binom{n}{x}$ Figure 7 illustrates the fraction of remaining paths after deleting 1, 10 and 20 modules from a 54-module network. It becomes apparent that the deletion of residual modules mostly affects the long paths. Even after deleting 10 residual modules, many of the effective paths between 5 and 17 modules long are still valid. Since mainly the effective paths are important for performance, this result is in line with the experiment shown in Figure 5 (a). Performance only drops slightly up to the removal of 10 residual modules; however, for the removal of 20 modules, we observe a severe drop in performance. [Figures 7 and 8 appear here. Figure 7 plots the fraction of remaining paths against path length when deleting 1, 10 and 20 modules, marking the effective paths; Figure 8 plots error when dropping any single block (CIFAR-10) against the dropped layer index for a 110-layer residual network v2 and for stochastic depth with d = 0.5 and linear decay.] Figure 7: Fraction of paths remaining after deleting individual layers. Deleting layers mostly affects long paths through the networks.
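The counting argument behind Figure 7 can be checked in a few lines; the formula is the one reconstructed above (a path of length x survives only if all x of its modules are among the n − d kept ones).

```python
# Sketch: fraction of remaining paths of length x after deleting d of n
# modules, i.e. C(n-d, x) / C(n, x), evaluated for the deletions in Figure 7.
from math import comb

def remaining_fraction(n, d, x):
    return comb(n - d, x) / comb(n, x)

n = 54
for d in (1, 10, 20):
    print(d, [round(remaining_fraction(n, d, x), 3) for x in (5, 17, 35)])
# Short paths survive deletion far better than long ones.
```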
1605.06431#23
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
24
Figure 7: Fraction of paths remaining after deleting individual layers. Deleting layers mostly affects long paths through the networks. Figure 8: Impact of stochastic depth on resilience to layer deletion. Training with stochastic depth only improves resilience slightly, indicating that plain residual networks already don't depend on individual layers. Compare to Fig. 3. Connection to highway networks In highway networks, t_i(·) multiplexes data flow through the residual and skip connections and t_i(·) = 0.5 means both paths are used equally. For highway networks in the wild, [19] observe empirically that the gates commonly deviate from t_i(·) = 0.5. In particular, they tend to be biased toward sending data through the skip connection; in other words, the network learns to use short paths. Similar to our results, this reinforces the importance of short paths.
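For comparison, a minimal highway-style block looks like the sketch below (a simplified fully-connected variant, not the exact formulation of [19]); the learned gate t(x) plays the role of t_i(·) above.

```python
# Sketch of a highway block: a sigmoid gate t(x) mixes the transformed branch
# and the skip connection; t(x) = 0.5 weights both paths equally, and a gate
# biased toward 0 favors the short path through the skip connection.
import torch
import torch.nn as nn

class HighwayBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))
        return t * self.f(x) + (1.0 - t) * x
```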
1605.06431#24
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]
1605.06431
25
Effect of stochastic depth training procedure Recently, an alternative training procedure for residual networks has been proposed, referred to as stochastic depth [9]. In that approach, a random subset of the residual modules is selected for each mini-batch during training; the forward and backward passes are performed only on those modules. Stochastic depth does not affect the number of paths in the network because all paths are available at test time. However, it changes the distribution of paths seen during training. In particular, mainly short paths are seen. Further, by selecting a different subset of short paths in each mini-batch, it encourages the paths to produce good results independently. Does this training procedure significantly reduce the dependence between paths? We repeat the experiment of deleting individual modules for a residual network trained using stochastic depth. The result is shown in Figure 8. Training with stochastic depth improves resilience slightly; only the dependence on the downsampling layers seems to be reduced. By now, this is not surprising: we know that plain residual networks already don't depend on individual layers. # 7 Conclusion
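A sketch of the stochastic depth procedure referred to above; survival probabilities follow the linear decay rule described in [9], and the block list is a hypothetical stand-in rather than the authors' implementation.

```python
# Sketch: stochastic depth drops each residual block for a whole mini-batch
# with probability 1 - p_i, where the survival probability p_i decays linearly
# with depth down to p_last; at test time every block is kept and its residual
# branch is scaled by p_i.
import torch

def stochastic_depth_forward(blocks, x, p_last=0.5, training=True):
    n = len(blocks)
    for i, f in enumerate(blocks):
        p_i = 1.0 - (i + 1) / n * (1.0 - p_last)   # linear decay of survival prob.
        if training:
            if torch.rand(1).item() < p_i:
                x = x + f(x)          # block survives for this mini-batch
            # else: only the skip connection is used
        else:
            x = x + p_i * f(x)        # expected-value rescaling at test time
    return x
```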
1605.06431#25
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
http://arxiv.org/pdf/1605.06431
Andreas Veit, Michael Wilber, Serge Belongie
cs.CV, cs.AI, cs.LG, cs.NE
NIPS 2016
null
cs.CV
20160520
20161027
[ { "id": "1603.09382" }, { "id": "1512.03385" }, { "id": "1603.05027" }, { "id": "1505.00387" } ]