# Learning End-to-End Goal-Oriented Dialog (arXiv:1605.07683)
All conversations are between native English speakers. We collected around 4k chats to create this extra dataset, denoted Concierge. All conversations have been anonymized by (1) removing all user identifiers, (2) using the Stanford NER tagger to remove named entities (locations, timestamps, etc.), and (3) running some manually defined regexes to filter out any remaining salient information (phone numbers, etc.). The dataset does not contain results from API calls, but still records when operators made use of an external service (Yelp or OpenTable) to gather information. Hence, these calls have to be predicted, but without any arguments (unlike in Task 2). The statistics of Concierge are given in Table 1. The dialogs are shorter than in Tasks 1-6, especially since they do not include results of API calls, but the vocabulary is more diverse and so is the candidate set; the candidate set is made of all utterances of the operator appearing in the training, validation and test sets. Beyond the higher variability of the language used by human operators compared to bots, the dataset offers additional challenges. The set of user requests is much wider, ranging from managing restaurant reservations to asking for recommendations or specific
information. Users do not always stay focused on the request. API calls are not always used (e.g., the operator might use neither Yelp nor OpenTable to find a restaurant), and facts about restaurants are neither structured nor constrained as in a KB. The structure of dialogs is thus much more variable. Users and operators also make typos, spelling and grammar mistakes.

¹ Lowe et al. (2016) termed this setting Next-Utterance-Classification.
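The anonymization pipeline described above (NER plus hand-written regexes) can be approximated in a few lines of Python. This is only an illustrative sketch: the placeholder tokens and patterns below are assumptions made for the example, not the actual filters used to build Concierge.

```python
import re

# Illustrative regex filters in the spirit of the post-NER cleanup described
# above; the real patterns used to build Concierge are not public.
PATTERNS = {
    "<phone>": re.compile(r"\+?\d[\d\-\s().]{7,}\d"),   # phone-like number runs
    "<email>": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "<url>":   re.compile(r"https?://\S+"),
}

def scrub(utterance: str) -> str:
    """Replace salient spans with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        utterance = pattern.sub(token, utterance)
    return utterance

print(scrub("call me at 415-555-0199 or book via https://example.com"))
# -> "call me at <phone> or book via <url>"
```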
# 4 MODELS

To demonstrate how to use the dataset and provide baselines, we evaluate several learning methods on our goal-oriented dialog tasks: rule-based systems, classical information retrieval methods, supervised embeddings, and end-to-end Memory Networks.

4.1 RULE-BASED SYSTEMS

Our tasks T1-T5 are built with a simulator so as to be completely predictable. Thus it is possible to hand-code a rule-based system that achieves 100% on them, similar to the bAbI tasks of Weston et al. (2015b). Indeed, the point of these tasks is not to check whether a human is smart enough to build a rule-based system to solve them, but to help analyze in which circumstances machine learning algorithms are smart enough to work, and where they fail. However, the Dialog State Tracking Challenge task (T6) contains real interactions with users. This makes rule-based systems less straightforward and less accurate (which is where we expect machine learning to be useful). We implemented a rule-based system for this task in the following way.
We initialized a dialog state using the 3 relevant slots for this task: cuisine type, location and price range. Then we analyzed the training data and wrote a series of rules that fire on triggers like word matches, positions in the dialog, entity detections or dialog state, to output particular responses, issue API calls and/or update the dialog state. Responses are created by combining patterns extracted from the training set with entities detected in the previous turns or stored in the dialog state. Overall we built 28 rules and extracted 21 patterns. We optimized the choice of rules and their application priority (when needed) using the validation set, reaching a validation per-response accuracy of 40.7%. We did not build a rule-based system for the Concierge data as it is even less constrained.

4.2 CLASSICAL INFORMATION RETRIEVAL MODELS

Classical information retrieval (IR) models with no machine learning are standard baselines that often perform surprisingly well on dialog tasks (Isbell et al., 2000; Jafarpour et al., 2010; Ritter et al., 2011; Sordoni et al., 2015). We tried two standard variants:

TF-IDF Match: For each possible candidate response, we compute a matching score between the input and the response, and rank the responses by score. The score is the TF-IDF weighted cosine similarity between the bag-of-words of the input and the bag-of-words of the candidate response. We consider the case of the input being either only the last utterance or the entire conversation history, and choose the variant that works best on the validation set (typically the latter).

Nearest Neighbor: Using the input, we find the most similar conversation in the training set, and output the response from that example. In this case we consider the input to be only the last utterance, and consider the training set as (utterance, response) pairs that we select from. We use word overlap as the scoring method. When several responses are associated with the same utterance in training, we sort them by decreasing co-occurrence frequency.
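As a concrete illustration of the two IR baselines, the sketch below scores candidate responses with TF-IDF cosine similarity and with plain word overlap against (utterance, response) pairs. It uses scikit-learn for TF-IDF and toy data in place of the real training set; it is not the authors' implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

candidates = ["api_call indian paris six moderate",
              "what price range are you looking for",
              "here it is resto_1_address"]
history = "may i have a table in paris i love indian food we will be six moderate please"

# TF-IDF Match: rank candidate responses by TF-IDF cosine similarity to the input.
vec = TfidfVectorizer().fit(candidates + [history])
scores = cosine_similarity(vec.transform([history]), vec.transform(candidates))[0]
print(candidates[int(np.argmax(scores))])

# Nearest Neighbor: score training utterances by word overlap with the last user
# utterance and return the response paired with the best-matching utterance.
train_pairs = [("i love indian food", "what price range are you looking for"),
               ("can i have a table in paris", "api_call french paris two cheap")]
last_utterance = "i love indian food please"

def overlap(a, b):
    return len(set(a.split()) & set(b.split()))

best_utt, best_resp = max(train_pairs, key=lambda p: overlap(p[0], last_utterance))
print(best_resp)
```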
4.3 SUPERVISED EMBEDDING MODELS

A standard, often strong, baseline is to use supervised word embedding models for scoring (conversation history, response) pairs, where the embedding vectors are trained directly for this goal. In contrast, word embeddings are best known in the context of unsupervised training on raw text, as in word2vec (Mikolov et al., 2013); such models are trained by learning to predict the middle word given the surrounding window of words, or vice-versa. However, given training data consisting of dialogs, a much more direct and strongly performing training procedure can be used: predict the next response given the previous conversation. In this setting a candidate response $y$ is scored against the input $x$: $f(x, y) = (Ax)^\top By$, where $A$ and $B$ are $d \times V$ word embedding matrices, i.e. input and response are treated as summed bags-of-embeddings. We also consider the case of enforcing $A = B$, which sometimes works better, and optimize the choice on the validation set. The embeddings are trained with a margin ranking loss: $f(x, y) > m + f(x, \bar{y})$, with $m$ the size of the margin; we sample $N$ negative candidate responses $\bar{y}$ per example and train with SGD.
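The scoring function and margin ranking loss can be sketched as follows in PyTorch. The vocabulary size, dimensions and the random toy batch are placeholders; the actual models in the paper tune these hyperparameters on the validation sets (Appendix C).

```python
import torch

V, d, m, N = 5000, 64, 0.1, 100               # vocab size, embed dim, margin, negatives
A = torch.nn.EmbeddingBag(V, d, mode="sum")   # input (history) embeddings
B = torch.nn.EmbeddingBag(V, d, mode="sum")   # candidate-response embeddings

def score(x_ids, y_ids):
    # f(x, y) = (A x)^T (B y), with x and y given as bags of word ids
    return (A(x_ids) * B(y_ids)).sum(-1)

opt = torch.optim.SGD(list(A.parameters()) + list(B.parameters()), lr=0.01)
x = torch.randint(V, (1, 20))       # toy conversation history (word ids)
y_pos = torch.randint(V, (1, 8))    # correct next response
y_neg = torch.randint(V, (N, 8))    # N sampled negative candidates

# Margin ranking loss: want f(x, y_pos) > m + f(x, y_neg) for every negative.
loss = torch.clamp(m + score(x.expand(N, -1), y_neg) - score(x, y_pos), min=0).mean()
opt.zero_grad(); loss.backward(); opt.step()
```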
This approach has previously been shown to be very effective in a range of contexts (Bai et al., 2009; Dodge et al., 2016). It can be thought of as a classical information retrieval model, but where the matching function is learnt.

4.4 MEMORY NETWORKS

Memory Networks (Weston et al., 2015a; Sukhbaatar et al., 2015) are a recent class of models that have been applied to a range of natural language processing tasks, including question answering (Weston et al., 2015b), language modeling (Sukhbaatar et al., 2015), and non-goal-oriented dialog (Dodge et al., 2016).
By first writing and then iteratively reading from a memory component (using hops) that can store historical dialogs and short-term context to reason about the required response, they have been shown to perform well on those tasks and to outperform some other end-to-end architectures based on Recurrent Neural Networks. Hence, we chose them as our end-to-end model baseline. We use the MemN2N architecture of Sukhbaatar et al. (2015), with an additional modification to leverage exact matches and types, described shortly. Apart from that addition, the main components of the model are (i) how it stores the conversation in memory, (ii) how it reads from the memory to reason about the response, and (iii) how it outputs the response. The details are given in Appendix A.

4.5 MATCH TYPE FEATURES TO DEAL WITH ENTITIES

Words denoting entities have two important traits: 1) exact matches are usually more appropriate to deal with them than approximate matches, and 2) they frequently appear as OOV words (e.g., the name of a new restaurant). Both are a challenge for embedding-based methods. Firstly, embedding into a low-dimensional space makes it hard to differentiate between exact word matches and matches between words with similar meaning (Bai et al., 2009). While this can be a virtue (e.g. when using synonyms), it is often a flaw when dealing with entities (e.g. failure to differentiate between phone numbers since they have similar embeddings). Secondly, when a new word is used (e.g. the name of a new restaurant) that has not been seen in training, no word embedding is available, typically resulting in failure (Weston et al., 2015a). Both problems can be alleviated with match type features.
Specifically, we augment the vocabulary with 7 special words, one for each of the KB entity types (cuisine type, location, price range, party size, rating, phone number and address). For each type, the corresponding type word is added to the candidate representation if a word is found that appears 1) as a KB entity of that type, 2) in the candidate, and 3) in the input or memory. Any word that matches as a KB entity can be typed even if it has never been seen before in training dialogs. These features allow the model to learn to rely on type information, using exact word-match cues when OOV entity embeddings are not known, as long as it has access to a KB containing the OOV entities. We assess the impact of such features for TF-IDF Match, Supervised Embeddings and Memory Networks.
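A minimal sketch of how such type words could be appended to a candidate's representation is shown below, assuming a toy KB; the entity names and type-word spellings are invented for illustration.

```python
# Toy KB mapping entity strings to their types; stand-in for the real task KB.
KB = {"resto_1_phone": "phone", "resto_1_address": "address", "paris": "location",
      "indian": "cuisine", "moderate": "price"}
TYPE_WORDS = {t: f"<match_{t}>" for t in set(KB.values())}

def add_match_type_words(candidate, context):
    """Append a type word for every KB entity that occurs in both the candidate
    and the conversation context (input or memory)."""
    cand_words, ctx_words = set(candidate.split()), set(context.split())
    extra = {TYPE_WORDS[KB[w]] for w in cand_words & ctx_words if w in KB}
    return candidate.split() + sorted(extra)

context = "resto_1 r_phone resto_1_phone resto_1 r_address resto_1_address"
print(add_match_type_words("here it is resto_1_phone", context))
# -> ['here', 'it', 'is', 'resto_1_phone', '<match_phone>']
```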
# 5 EXPERIMENTS

Our main results across all the models and tasks are given in Table 2 (extra results are also given in Table 10 of Appendix D). The first 5 rows show tasks T1-T5, and rows 6-10 show the same tasks in the out-of-vocabulary setting. Rows 11 and 12 give results for the Dialog State Tracking Challenge task (T6) and Concierge, respectively. Columns 2-7 give the results of each method in terms of per-response accuracy and per-dialog accuracy, the latter given in parentheses. Per-response accuracy counts the percentage of responses that are correct (i.e., the correct candidate is chosen out of all possible candidates). Per-dialog accuracy counts the percentage of dialogs where every response is correct. Ultimately, even a single incorrect response can result in a failed dialog, i.e. failure to achieve the goal (in this case, a restaurant booking).
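The two metrics can be computed as below; the dialog structure is a toy stand-in for the real test sets.

```python
def per_response_and_per_dialog_accuracy(dialogs):
    """dialogs: list of dialogs, each a list of (predicted, correct) response pairs."""
    responses = [p == c for d in dialogs for (p, c) in d]
    dialog_ok = [all(p == c for (p, c) in d) for d in dialogs]
    return sum(responses) / len(responses), sum(dialog_ok) / len(dialog_ok)

dialogs = [[("hi", "hi"), ("api_call a", "api_call a")],
           [("hi", "hi"), ("api_call a", "api_call b")]]
print(per_response_and_per_dialog_accuracy(dialogs))  # (0.75, 0.5)
```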
Note that we test Memory Networks (MemNNs) with and without match type features; the results are shown in the last two columns. The hyperparameters for all models were optimized on the validation sets; values for the best performing models are given in Appendix C. The classical IR method TF-IDF Match performs the worst of all methods, and much worse than the Nearest Neighbor IR method, both on the simulated tasks T1-T5 and on the real data of T6 and Concierge. Supplementing TF-IDF Match with match type features noticeably improves performance, which however still remains far behind Nearest Neighbor IR (adding bigrams to the
Table 2: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog State Tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parentheses. (*) For Concierge, an example is considered correctly answered if the correct response is ranked among the top 10 candidates by the bot, to accommodate the much larger range of semantically equivalent responses among candidates (see ex. in Tab. 7). (†) We did not implement MemNNs + match type on Concierge, because this method requires a KB and there is none associated with it.

| Task | Rule-based Systems | TF-IDF Match (no type) | TF-IDF Match (+ type) | Nearest Neighbor | Supervised Embeddings | Memory Networks (no match type) | Memory Networks (+ match type) |
|---|---|---|---|---|---|---|---|
| T1: Issuing API calls | **100** (100) | 5.6 (0) | 22.4 (0) | 55.1 (0) | **100** (100) | **99.9** (99.6) | **100** (100) |
| T2: Updating API calls | **100** (100) | 3.4 (0) | 16.4 (0) | 68.3 (0) | 68.4 (0) | **100** (100) | 98.3 (83.9) |
| T3: Displaying options | **100** (100) | 8.0 (0) | 8.0 (0) | 58.8 (0) | 64.9 (0) | 74.9 (2.0) | 74.9 (0) |
| T4: Providing information | **100** (100) | 9.5 (0) | 17.8 (0) | 28.6 (0) | 57.2 (0) | 59.5 (3.0) | **100** (100) |
| T5: Full dialogs | **100** (100) | 4.6 (0) | 8.1 (0) | 57.1 (0) | 75.4 (0) | 96.1 (49.4) | 93.4 (19.7) |
| T1 (OOV): Issuing API calls | **100** (100) | 5.8 (0) | 22.4 (0) | 44.1 (0) | 60.0 (0) | 72.3 (0) | 96.5 (82.7) |
| T2 (OOV): Updating API calls | **100** (100) | 3.5 (0) | 16.8 (0) | 68.3 (0) | 68.3 (0) | 78.9 (0) | 94.5 (48.4) |
| T3 (OOV): Displaying options | **100** (100) | 8.3 (0) | 8.3 (0) | 58.8 (0) | 65.0 (0) | 74.4 (0) | 75.2 (0) |
| T4 (OOV): Providing inform. | **100** (100) | 9.8 (0) | 17.2 (0) | 28.6 (0) | 57.0 (0) | 57.6 (0) | **100** (100) |
| T5 (OOV): Full dialogs | **100** (100) | 4.6 (0) | 9.0 (0) | 48.4 (0) | 58.2 (0) | 65.5 (0) | 77.7 (0) |
| T6: Dialog state tracking 2 | 33.3 (0) | 1.6 (0) | 1.6 (0) | 21.9 (0) | 22.6 (0) | **41.1** (0) | **41.0** (0) |
| Concierge (*) | n/a | 1.1 (0.2) | n/a | 13.4 (0.5) | 14.6 (0.5) | **16.7** (1.2) | n/a (†) |
dictionary has no effect on performance). This is in sharp contrast to other recent results on data-driven non-goal-directed conversations, e.g. over dialogs on Twitter (Ritter et al., 2011) or Reddit (Dodge et al., 2016), where it was found that TF-IDF Match outperforms Nearest Neighbor, as general conversations on a given subject typically share many words. We conjecture that the goal-oriented nature of the conversation means that the conversation moves forward more quickly, sharing fewer words per (input, response) pair; consider for example the dialog in Figure 1. Supervised embeddings outperform classical IR methods in general, indicating that learning mappings between words (via word embeddings) is important.
However, only one task (T1, Issuing API calls) is completely successful. In the other tasks, some responses are correct, as shown by the per-response accuracy, but there is no dialog where the goal is actually achieved (i.e., the mean dialog accuracy is 0). Typically the model can provide correct responses for greeting messages, asking the user to wait, making API calls and asking whether any other options are needed. However, it fails to interpret the results of API calls to display options, provide information or update the calls with new information, which accounts for most of its errors, even when match type features are provided.
Memory Networks (without match type features) outperform classical IR and supervised embeddings across all of the tasks. They can solve the first two tasks (issuing and updating API calls) adequately. On the other tasks, they give improved results, but do not solve them: while the per-response accuracy is improved, the per-dialog accuracy is still close to 0 on T3 and T4. Some examples of MemNN predictions for T1-4 are given in Appendix B. On the OOV tasks performance is again improved, but this is entirely due to better performance on known words, as unknown words are simply not used without the match type features. As stated in Appendix C, optimal hyperparameters on several of the tasks involve 3 or 4 hops, indicating that iteratively accessing and reasoning over the conversation helps, e.g. on T3 using 1 hop gives 64.8% while 2 hops yield 74.7%. Appendix B displays illustrative examples of Memory Network predictions on T1-4 and Concierge. Memory Networks with match type features give two performance gains over the same models without them: (i) T4 (providing information) becomes solvable because matches can be made to the results of the API call; and (ii) out-of-vocabulary results are significantly improved as well. However, tasks T3 and T5 remain fail cases, performance drops slightly on T2 compared to not using match type features, and no relative improvement is observed on T6. Finally, note that matching words on its own is not enough, as evidenced by the poor performance of TF-IDF matching; this idea must be combined with types and the other properties of the MemNN model. Unsurprisingly, perfectly coded rule-based systems can solve the simulated tasks T1-T5 perfectly, whereas our machine learning methods cannot. However, it is not easy to build an effective rule-based
system when dealing with real language on real problems, and our rule-based system is outperformed by MemNNs on the more realistic task T6. Overall, while the methods we tried made some inroads into these tasks, there are still many challenges left unsolved. Our best models can learn to track implicit dialog states and manipulate OOV words and symbols (T1-T2) to issue API calls and progress in conversations, but they are still unable to perfectly handle interpreting knowledge about entities (from returned API calls) to present results to the user, e.g. displaying options in T3. The improvement observed on the simulated tasks, where MemNNs outperform supervised embeddings, which in turn outperform IR methods, is also seen on the realistic data of T6 with similar relative gains. This is encouraging as it indicates that future work on breaking down, analysing and developing models over the simulated tasks should help on the real tasks as well.
Results on Concierge confirm this observation: the pattern of relative performances of the methods is the same on Concierge and on our series of tasks. This suggests that our synthetic data can indeed be used as an effective evaluation proxy.

# 6 CONCLUSION

We have introduced an open dataset and task set for evaluating end-to-end goal-oriented dialog learning methods in a systematic and controlled way. We hope this will help foster progress of end-to-end conversational agents because (i) existing measures of performance either prevent reproducibility (different Mechanical Turk jobs) or do not correlate well with human judgements (Liu et al., 2016); (ii) the breakdown into tasks will help focus research and development on improving the learning methods; and (iii) goal-oriented dialog has clear utility in real applications. We illustrated how to use the testbed using a variant of end-to-end Memory Networks, which prove an effective model on these tasks relative to other baselines, but are still lacking in some key areas.
ACKNOWLEDGMENTS

The authors would like to thank Martin Raison, Alex Lebrun and Laurent Landowski for their help with the Concierge data.

# REFERENCES

Bai, B., Weston, J., Grangier, D., Collobert, R., Sadamasa, K., Qi, Y., Chapelle, O., and Weinberger, K. (2009). Supervised semantic indexing. In Proceedings of ACM CIKM, pages 187-196. ACM.
Banchs, R. E. (2012). Movie-dic: a movie dialogue corpus for research and development. In Proceedings of the 50th Annual Meeting of the ACL.
Chen, Y.-N., Hakkani-Tür, D., Tur, G., Gao, J., and Deng, L. (2016). End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In Proceedings of Interspeech.
Dahl, D. A., Bates, M., Brown, M., Fisher, W., Hunicke-Smith, K., Pallett, D., Pao, C., Rudnicky, A., and Shriberg, E. (1994). Expanding the scope of the ATIS task: The ATIS-3 corpus. In Proceedings of the Workshop on Human Language Technology, pages 43-48. Association for Computational Linguistics.
Dodge, J., Gane, A., Zhang, X., Bordes, A., Chopra, S., Miller, A., Szlam, A., and Weston, J. (2016). Evaluating prerequisite qualities for learning end-to-end dialog systems. In Proc. of ICLR.
Gašic, M., Kim, D., Tsiakoulis, P., Breslin, C., Henderson, M., Szummer, M., Thomson, B., and Young, S. (2014). Incremental on-line adaptation of POMDP-based dialogue managers to extended domains. In Proceedings of InterSpeech.
Henderson, M., Thomson, B., and Williams, J. (2014a). The second dialog state tracking challenge. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 263.
Henderson, M., Thomson, B., and Young, S. (2014b). Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292-299.
Hixon, B., Clark, P., and Hajishirzi, H. (2015). Learning knowledge graphs for question answering through conversational dialog. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA.
Isbell, C. L., Kearns, M., Kormann, D., Singh, S., and Stone, P. (2000). Cobot in LambdaMOO: A social statistics agent. In AAAI/IAAI, pages 36-41.
Jafarpour, S., Burges, C. J., and Ritter, A. (2010). Filter, rank, and transfer the knowledge: Learning to chat. Advances in Ranking, 10.
Kim, S., D'Haro, L. F., Banchs, R. E., Williams, J. D., and Henderson, M. (2016). The fourth dialog state tracking challenge. In Proceedings of the 7th International Workshop on Spoken Dialogue Systems (IWSDS).
Lemon, O., Georgila, K., Henderson, J., and Stuttle, M. (2006). An ISU dialogue system exhibiting reinforcement learning of dialogue policies: generic slot-filling in the TALK in-car system. In Proceedings of the 11th Conference of the European Chapter of the ACL: Posters & Demonstrations, pages 119-122.
Liu, C.-W., Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L., and Pineau, J. (2016). How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023.
Lowe, R., Pow, N., Serban, I., and Pineau, J. (2015). The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909.
Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L., and Pineau, J. (2016). On the evaluation of dialogue systems with next utterance classification. arXiv preprint arXiv:1605.05414.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Pietquin, O. and Hastie, H. (2013). A survey on metrics for the evaluation of user simulations. The Knowledge Engineering Review, 28(01), 59-73.
Ritter, A., Cherry, C., and Dolan, W. B. (2011). Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Serban, I. V., Sordoni, A., Bengio, Y., Courville, A., and Pineau, J. (2015a). Building end-to-end dialogue systems using generative hierarchical neural network models. In Proc. of the AAAI Conference on Artificial Intelligence.
Serban, I. V., Lowe, R., Charlin, L., and Pineau, J. (2015b). A survey of available corpora for building data-driven dialogue systems. arXiv preprint arXiv:1512.05742.
Shang, L., Lu, Z., and Li, H. (2015). Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364.
Sordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Nie, J.-Y., Gao, J., and Dolan, B. (2015). A neural network approach to context-sensitive generation of conversational responses. Proceedings of NAACL.
Su, P.-H., Vandyke, D., Gasic, M., Kim, D., Mrksic, N., Wen, T.-H., and Young, S. (2015a). Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. arXiv preprint arXiv:1508.03386.
Su, P.-H., Vandyke, D., Gasic, M., Mrksic, N., Wen, T.-H., and Young, S. (2015b). Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. arXiv preprint arXiv:1508.03391.
Sukhbaatar, S., Szlam, A., Weston, J., and Fergus, R. (2015). End-to-end memory networks. Proceedings of NIPS.
Vinyals, O. and Le, Q. (2015). A neural conversational model. arXiv preprint arXiv:1506.05869.
Wang, H., Lu, Z., Li, H., and Chen, E. (2013). A dataset for research on short-text conversations. In EMNLP.
Wang, Z. and Lemon, O. (2013). A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Proceedings of the SIGDIAL 2013 Conference.
Wen, T.-H., Gasic, M., Mrksic, N., Su, P.-H., Vandyke, D., and Young, S. (2015). Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.
Weston, J., Chopra, S., and Bordes, A. (2015a). Memory networks. Proceedings of ICLR.
Weston, J., Bordes, A., Chopra, S., and Mikolov, T. (2015b). Towards AI-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.
Young, S., Gasic, M., Thomson, B., and Williams, J. D. (2013). POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5), 1160-1179.
# A MEMORY NETWORKS IMPLEMENTATION

Storing and representing the conversation history. As the model conducts a conversation with the user, at each time step $t$ the previous utterance (from the user) and response (from the model) are appended to the memory. Hence, at any given time the user utterances and model responses $c^u_1, c^r_1, \ldots, c^u_{t-1}, c^r_{t-1}$ are stored (i.e. the conversation so far).² The aim at time $t$ is thus to choose the next response $c^r_t$. We train on existing full dialog transcripts, so at training time we know the upcoming utterance $c^r_t$ and can use it as a training target. Following Dodge et al. (2016), we represent each utterance as a bag-of-words, and in memory it is represented as a vector using the embedding matrix $A$, i.e. the memory is an array with entries:

$$m = \big(A\Phi(c^u_1),\, A\Phi(c^r_1),\, \ldots,\, A\Phi(c^u_{t-1}),\, A\Phi(c^r_{t-1})\big)$$

where $\Phi(\cdot)$ maps the utterance to a bag of dimension $V$ (the vocabulary), and $A$ is a $d \times V$ matrix, where $d$ is the embedding dimension. We retain the last user utterance $c^u_t$ as the "input" to be used directly in the controller.
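A minimal NumPy sketch of this memory representation is given below, with toy word ids; the split of the vocabulary into word, time and speaker features follows the description above and below, but the sizes and values are placeholders.

```python
import numpy as np

V_words, T_max, d = 5000, 1000, 32
V = V_words + T_max + 2                  # word features + time features + 2 speaker features
rng = np.random.default_rng(0)
A = 0.1 * rng.normal(size=(d, V))        # embedding matrix A (d x V)

def phi(word_ids, memory_index, speaker):
    """Bag-of-words of one utterance, extended with a time feature for its slot
    index and a feature marking whether the user or the model spoke it."""
    bag = np.zeros(V)
    for w in word_ids:
        bag[w] += 1
    bag[V_words + min(memory_index, T_max - 1)] = 1
    bag[V_words + T_max + (0 if speaker == "user" else 1)] = 1
    return bag

# Memory entries m_i = A phi(c_i) for alternating user/bot turns (toy word ids).
history = [([12, 7, 85], "user"), ([301, 4], "bot")]
memory = np.stack([A @ phi(ids, i, who) for i, (ids, who) in enumerate(history)])
q = A @ phi([9, 44, 12], len(history), "user")   # last user utterance, kept as the input
```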
The contents of each memory slot $m_i$ so far do not contain any information about which speaker spoke an utterance, or when during the conversation it was spoken. We therefore encode both of those pieces of information in the mapping $\Phi$ by extending the vocabulary to contain $T = 1000$ extra "time features" which encode the index $i$ into the bag-of-words, and two more features that encode whether the utterance was spoken by the user or the model.

Attention over the memory. The last user utterance $c^u_t$ is embedded using the same matrix $A$, giving $q = A\Phi(c^u_t)$, which can also be seen as the initial state of the controller. At this point the controller reads from the memory to find salient parts of the previous conversation that are relevant to producing a response. The match between $q$ and the memories is computed by taking the inner product followed by a softmax: $p_i = \mathrm{Softmax}(q^\top m_i)$, giving a probability vector over the memories. The vector returned to the controller is then computed by $o = R \sum_i p_i m_i$, where $R$ is a $d \times d$ square matrix. The controller state is then updated with $q_2 = o + q$. The memory can be iteratively reread to look for additional pertinent information, using the updated controller state $q_2$ instead of $q$, and in general using $q_h$ on iteration $h$, with a fixed number of iterations $N$ (termed $N$ hops). Empirically we find improved performance on our tasks with up to 3 or 4 hops.
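The read operation can be sketched as follows. The matrices are random placeholders rather than trained parameters, and a single matrix R is reused across hops for simplicity; whether R is shared across hops is an assumption of this sketch, not something specified above.

```python
import numpy as np

d, n_mem = 32, 6
rng = np.random.default_rng(0)
memory = rng.normal(size=(n_mem, d))    # m_i = A phi(c_i), as sketched earlier
q = rng.normal(size=d)                  # q = A phi(last user utterance)
R = 0.1 * rng.normal(size=(d, d))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def read_memory(q, memory, hops=3):
    """p_i = softmax(q^T m_i); o = R sum_i p_i m_i; q <- o + q, for a fixed number of hops."""
    for _ in range(hops):
        p = softmax(memory @ q)     # attention weights over memory slots
        o = R @ (memory.T @ p)      # weighted sum of memories, mapped through R
        q = o + q                   # updated controller state
    return q

q_final = read_memory(q, memory)
```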
Choosing the response. The final prediction is then defined as:

$$\hat{a} = \mathrm{Softmax}\big(q_{N+1}^\top W\Phi(y_1), \ldots, q_{N+1}^\top W\Phi(y_C)\big)$$

where there are $C$ candidate responses in $y$, and $W$ is of dimension $d \times V$. In our tasks the set $y$ is a (large) set of candidate responses which includes all possible bot utterances and API calls.
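The final candidate scoring can be sketched in the same style, again with random placeholder parameters and toy candidate bags-of-words:

```python
import numpy as np

d, V, C = 32, 5000, 4
rng = np.random.default_rng(1)
W = 0.1 * rng.normal(size=(d, V))            # output embedding matrix W (d x V)
q_final = rng.normal(size=d)                 # controller state after the last hop
cand_bags = rng.integers(0, 2, size=(C, V))  # Phi(y_1), ..., Phi(y_C): candidate bags-of-words

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# a_hat = Softmax(q^T W Phi(y_1), ..., q^T W Phi(y_C))
scores = cand_bags @ (W.T @ q_final)   # q^T W Phi(y_c) for each candidate c
a_hat = softmax(scores)
predicted = int(np.argmax(a_hat))
```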
The entire model is trained using stochastic gradient descent (SGD), minimizing a standard cross-entropy loss between $\hat{a}$ and the true label $a$.

² API calls are stored as bot utterances $c^r_i$, and KB facts resulting from such calls as user utterances $c^u_i$.

# B EXAMPLES OF PREDICTIONS OF A MEMORY NETWORK

Tables 3, 4, 5 and 6 display examples of predictions of the best performing Memory Network on full dialogs, Task 5 (with 3 hops), on test examples of Tasks 1-4, along with the values of the attention over each memory for each hop ($p_i$ as defined in Sec. A). This model does not use match type features. Table 7 then displays an example of a prediction of the best performing Memory Network on Concierge (with 2 hops) on a test example, along with the values of the attention over each memory for each hop.

# C HYPERPARAMETERS

Tables 8 and 9 respectively display the values of the hyperparameters of the best Supervised Embeddings and Memory Networks selected for each task. These models were selected using the validation sets.

# D ADDITIONAL RESULTS

Table 10 provides results for additional variants of supervised embeddings, using either a dictionary that includes all bigrams to leverage some word order information, or match type features. On some tasks, supervised embeddings perform better when the last user utterance is used as the sole input, without the full dialog history (see Table 8). When no history is used, we slightly adapt match type features to only record the type: a special word corresponding to type T (e.g., phone, address, etc.) is appended to the representation of a candidate if the
Table 3: Task 1 (Issue API call). The model learns to direct its attention towards the 4 memories containing the information key to issue the API call. More hops help to strengthen this signal. <silence> is a special token used to indicate that the user did not speak at this turn; the model has to carry out the conversation with no additional input.

Locutor Time User 1 Bot 2 User 3 Bot 4 User 5 Bot 6 User 7 Bot 8 User 9 Bot 10 User 11 12 Bot User input Correct answer Predicted answer Dialog History hi hello what can i help you with today may i have a table in paris i'm on it <silence> any preference on a type of cuisine i love indian food how many people would be in your party we will be six which price range are looking for in a moderate price range please ok let me look into some options for you <silence> api_call indian paris six moderate api_call indian paris six moderate Hop #1 Hop #2 Hop #3 .016 .024 .137 .028 .099 .090 .188 .022 .144 .028 .213 .011 .054 .040 .099 .048 .181 .056 .159 .051 .129 .039 .128 .016 .020 .008 .141 .004 .037 .014 .238 .010 .114 .006 .405 .003 [Correct]

Table 4: Task 2 (Update API call). Out of the multiple memories from the current dialog, the model correctly focuses on the 2 important pieces: the original API call and the utterance giving the update.

Hop #1 Hop #2 Hop #3 .072 .012 .042 .023 .070 .006 .051 .043 .095 .042 .069 .113 .311 .007 .013 .006 .061 .026 .087 .026 .081 .025 .059 .038 .080 .025 .127 .062 .188 .016 .028 .011 .040 .001 .012 .001 .055 .001 .018 .004 .096 .003 .032 .043 .683 .001 .007 .000
Table 5: Task 3 (Displaying options). The model knows it has to display options but the attention is wrong: it should attend to the ratings to select the best option (with highest rating). It cannot learn that properly and match type features do not help. It is correct here by luck; the task is not solved overall (see Tab. 2). We do not show all memories in the table, only those with meaningful attention.

Time 14 15 20 21 23 24 25 26 27 30 31 32 33 37 38 39 40 User input Correct answer Predicted answer Locutor Bot User Bot User User User User User User User User User User User User User User Dialog history api_call indian paris six moderate instead could it be with french food api_call french paris six moderate resto_1 r_phone rest_1_phone resto_1 r_cuisine french resto_1 r_location paris resto_1 r_number six resto_1 r_price moderate resto_1 r_rating 6 resto_2 r_cuisine french resto_2 r_location paris resto_2 r_number six resto_2 r_price moderate resto_3 r_cuisine french resto_3 r_location paris resto_3 r_number six resto_3 r_price moderate <silence> what do you think of this option: resto_1 what do you think of this option: resto_1 Hop #1 Hop #2 Hop #3 .000 .103 .000 .004 .005 .292 .298 .090 .002 .007 .081 .012 .009 .001 .016 .022 .015 .012 .067 .012 .018 .029 .060 .050 .060 .016 .031 .040 .020 .029 .014 .028 .024 .039 .000 .147 .000 .000 .000 .094 .745 .002 .000 .000 .004 .000 .000 .000 .001 .004 .001 [Correct]

Table 6: Task 4 (Providing extra information). The model knows it must display a phone number or an address, but, as explained in Section A, the embeddings mix up the information and make it hard to distinguish between different phone numbers or addresses, making answering correctly very hard.
As shown in the results of Tab. 2, this problem can be solved by adding match type features, which allow the model to emphasize entities actually appearing in the history. The attention is globally wrong here.

Time Locutor 14 Bot 15 User 20 Bot 21 User 22 User 23 User 24 User 25 User 26 User 27 User 28 User 29 User 31 User 32 User 33 User 35 User 36 User 37 User 39 User 40 User 42 Bot 43 User Bot 44 User input Correct answer Predicted answer Dialog history api_call indian paris six moderate instead could it be with french food api_call french paris six moderate resto_1 r_phone resto_1_phone resto_1 r_address resto_1_address resto_1 r_cuisine french resto_1 r_location paris resto_1 r_number six resto_1 r_price moderate resto_1 r_rating 6 resto_2 r_phone resto_2_phone resto_2 r_address resto_2_address resto_2 r_location paris resto_2 r_number six resto_2 r_price moderate resto_3 r_phone resto_3_phone resto_3 r_address resto_3_address resto_3 r_location paris resto_3 r_number six resto_3 r_price moderate what do you think of this option: resto_1 let's do it great let me do the reservation do you have its address here it is resto_1_address here it is: resto_8_address Hop #1 Hop #2 Hop #3 .000 .011 .000 .005 .004 .003 .091 .078 .225 .006 .009 .004 .176 .126 .090 .001 .002 .028 .013 .008 .001 .004 .000 .006 .024 .005 .011 .018 .018 .068 .086 .070 .014 .015 .014 .075 .100 .038 .004 .005 .028 .039 .018 .074 .032 .003 .000 .007 .001 .004 .001 .001 .108 .020 .369 .008 .006 .001 .193 .026 .167 .001 .001 .026 .002 .013 .000 .001 .000 [Incorrect]
Table 7: Concierge Data. The model is also able to learn from human-human dialogs. <person>, <org>, <number> and <date> are special tokens used to anonymize the data. We report the top 5 answers predicted by the model. They are all semantically equivalent. Note that the utterances, while all produced by humans, are not perfect English ("rservation", "I'll check into it").

Time 1 2 3 4 5 Locutor User User User User Bot User input Correct answer Pred. answer #1 Pred. answer #2 Pred. answer #3 Pred. answer #4 Pred. answer #5 Dialog History hey concierge could you check if i can get a rservation at <org> <date> for brunch <number> people <silence> hi <person> unfortunately <org> is fully booked for <date> and there's <number> people on the waiting list when's the earliest availability i'll check i'm on it i'll find out i'll take a look i'll check i'll check into it Hop #1 Hop #2 .189 .209 .197 .187 .225 .095 .178 .142 .167 .410 [Incorrect] [Incorrect] [Incorrect] [Correct] [Incorrect]
Table 8: Hyperparameters of Supervised Embeddings. When Use History is True, the whole conversation history is concatenated with the latest user utterance to create the input. If False, only the latest utterance is used as input.

| Task | Learning Rate | Margin m | Embedding Dim d | Negative Cand. N | Use History |
|---|---|---|---|---|---|
| Task 1 | 0.01 | 0.01 | 32 | 100 | True |
| Task 2 | 0.01 | 0.01 | 128 | 100 | False |
| Task 3 | 0.01 | 0.1 | 128 | 1000 | False |
| Task 4 | 0.001 | 0.1 | 128 | 1000 | False |
| Task 5 | 0.01 | 0.01 | 32 | 100 | True |
| Task 6 | 0.001 | 0.01 | 128 | 100 | False |
| Concierge | 0.001 | 0.1 | 64 | 100 | False |

Table 9: Hyperparameters of Memory Networks. The longer and more complex the dialogs are, the more hops are needed.

| Task | Learning Rate | Margin m | Embedding Dim d | Negative Cand. N | Nb Hops |
|---|---|---|---|---|---|
| Task 1 | 0.01 | 0.1 | 128 | 100 | 1 |
| Task 2 | 0.01 | 0.1 | 32 | 100 | 1 |
| Task 3 | 0.01 | 0.1 | 32 | 100 | 3 |
| Task 4 | 0.01 | 0.1 | 128 | 100 | 2 |
| Task 5 | 0.01 | 0.1 | 32 | 100 | 3 |
| Task 6 | 0.01 | 0.1 | 128 | 100 | 4 |
| Concierge | 0.001 | 0.1 | 128 | 100 | 2 |

candidate contains a word that appears in the knowledge base as an entity of type T, regardless of whether the same word appeared earlier in the conversation. As seen in Table 10, match type features improve performance on out-of-vocabulary Tasks 1 and 5, bringing it closer to that of Memory Networks without match type features, but still well behind Memory Networks with match type features. Bigrams slightly hurt rather than help performance, except on Task 5 in the standard in-vocabulary setup (performance is lower in the OOV setup).
Table 10: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog State Tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parentheses.

| Task | Supervised Embeddings (no match type, no bigram) | Supervised Embeddings (+ match type, no bigram) | Supervised Embeddings (+ bigrams, no match type) | Memory Networks (no match type) | Memory Networks (+ match type) |
|---|---|---|---|---|---|
| T1: Issuing API calls | **100** (100) | 83.2 (0) | 98.6 (92.4) | **99.9** (99.6) | **100** (100) |
| T2: Updating API calls | 68.4 (0) | 68.4 (0) | 68.3 (0) | **100** (100) | 98.3 (83.9) |
| T3: Displaying options | 64.9 (0) | 64.9 (0) | 64.9 (0) | **74.9** (2.0) | **74.9** (0) |
| T4: Providing information | 57.2 (0) | 57.2 (0) | 57.3 (0) | 59.5 (3.0) | **100** (100) |
| T5: Full dialogs | 75.4 (0) | 76.2 (0) | 83.4 (0) | **96.1** (49.4) | 93.4 (19.7) |
| T1 (OOV): Issuing API calls | 60.0 (0) | 67.2 (0) | 58.8 (0) | 72.3 (0) | **96.5** (82.7) |
| T2 (OOV): Updating API calls | 68.3 (0) | 68.3 (0) | 68.3 (0) | 78.9 (0) | **94.5** (48.4) |
| T3 (OOV): Displaying options | 65.0 (0) | 65.0 (0) | 62.1 (0) | 74.4 (0) | **75.2** (0) |
| T4 (OOV): Providing inform. | 57.0 (0) | 57.1 (0) | 57.0 (0) | 57.6 (0) | **100** (100) |
| T5 (OOV): Full dialogs | 58.2 (0) | 64.4 (0) | 50.4 (0) | 65.5 (0) | **77.7** (0) |
| T6: Dialog state tracking 2 | 22.6 (0) | 22.1 (0) | 21.8 (0) | **41.1** (0) | **41.0** (0) |
# Residual Networks Behave Like Ensembles of Relatively Shallow Networks (arXiv:1605.06431)

Andreas Veit, Michael Wilber, Serge Belongie
Department of Computer Science & Cornell Tech, Cornell University
{av443, mjw285, sjb344}@cornell.edu

# Abstract

In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
# Introduction

Most modern computer vision systems follow a familiar architecture, processing inputs from low-level features up to task-specific high-level features. Recently proposed residual networks [5, 6] challenge this conventional view in three ways. First, they introduce identity skip-connections that bypass residual layers, allowing data to flow from any layer directly to any subsequent layer. This is in stark contrast to the traditional strictly sequential pipeline. Second, skip connections give rise to networks that are two orders of magnitude deeper than previous models, with as many as 1202 layers. This is contrary to architectures like AlexNet [13] and even biological systems [17] that can capture complex concepts within half a dozen layers.¹ Third, in initial experiments, we observe that removing single layers from residual networks at test time does not noticeably affect their performance. This is surprising because removing a layer from a traditional architecture such as VGG [18] leads to a dramatic loss in performance.
In this work we investigate the impact of these differences. To address the influence of identity skip-connections, we introduce the unraveled view. This novel representation shows that residual networks can be viewed as a collection of many paths instead of a single deep network. Further, the perceived resilience of residual networks raises the question whether the paths are dependent on each other or whether they exhibit a degree of redundancy. To find out, we perform a lesion study. The results show ensemble-like behavior in the sense that removing paths from residual networks by deleting layers, or corrupting paths by reordering layers, only has a modest and smooth impact on performance. Finally, we investigate the depth of residual networks. Unlike traditional models, paths through residual networks vary in length. The distribution of path lengths follows a binomial distribution, meaning
that the majority of paths in a network with 110 layers are only about 55 layers deep. Moreover, we show that most gradient during training comes from paths that are even shorter, i.e., 10-34 layers deep. This reveals a tension. On the one hand, residual network performance improves with adding more and more layers [6]. However, on the other hand, residual networks can be seen as collections of many paths, and the only effective paths are relatively shallow.

¹ Making the common assumption that a layer in a neural network corresponds to a cortical area.
Our results could provide a first explanation: residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network. Rather, they enable very deep networks by shortening the effective paths. For now, short paths still seem necessary to train very deep networks. In this paper we make the following contributions:

- We introduce the unraveled view, which illustrates that residual networks can be viewed as a collection of many paths, instead of a single ultra-deep network.
- We perform a lesion study to show that these paths do not strongly depend on each other, even though they are trained jointly. Moreover, they exhibit ensemble-like behavior in the sense that their performance smoothly correlates with the number of valid paths.
- We investigate the gradient flow through residual networks, revealing that only the short paths contribute gradient during training. Deep paths are not required during training.

# 2 Related Work

The sequential and hierarchical computer vision pipeline. Visual processing has long been understood to follow a hierarchical process from the analysis of simple to complex features. This formalism is based on the discovery of the receptive field [10], which characterizes the visual system as a hierarchical and feedforward system. Neurons in early visual areas have small receptive fields and are sensitive to basic visual features, e.g., edges and bars. Neurons in deeper layers of the hierarchy capture basic shapes, and even deeper neurons respond to full objects. This organization has been widely adopted in the computer vision and machine learning literature, from early neural networks such as the Neocognitron [4] and the traditional hand-crafted feature pipeline of Malik and Perona [15] to convolutional neural networks [13, 14]. The recent strong results of very deep neural networks [18, 20] led to the general perception that it is the depth of neural networks that governs their expressive power and performance. In this work, we show that residual networks do not necessarily follow this tradition.

Residual networks [5, 6] are neural networks in which each layer consists of a residual module $f_i$ and a skip connection² bypassing $f_i$. Since layers in residual networks can comprise multiple convolutional layers, we refer to them as residual blocks in the remainder of this paper. For clarity of notation, we omit the initial pre-processing and final classification steps. With $y_{i-1}$ as its input, the output of the $i$-th block is recursively defined as

$$y_i \equiv f_i(y_{i-1}) + y_{i-1}, \qquad (1)$$

where $f_i(x)$ is some sequence of convolutions, batch normalization [11], and Rectified Linear Units (ReLU) as nonlinearities. Figure 1 (a) shows a schematic view of this architecture. In the most recent formulation of residual networks [6], $f_i(x)$ is defined as
nal classiï¬ cation steps. With yiâ 1 as is input, the output of the ith block is recursively deï¬ ned as yi â ¡ fi(yiâ 1) + yiâ 1, (1) where fi(x) is some sequence of convolutions, batch normalization [11], and Rectiï¬ ed Linear Units (ReLU) as nonlinearities. Figure 1 (a) shows a schematic view of this architecture. In the most recent formulation of residual networks [6], fi(x) is deï¬
ned by Ale) = Wi-o(B(Wi-o(B(a)))). @) where W; and W/ are weight matrices, - denotes convolution, B(x) is batch normalization and o(x) = max(z,0). Other formulations are typically composed of the same operations, but may differ in their order. The idea of branching paths in neural networks is not new. For example, in the regime of convolutional neural networks, models based on inception modules [20] were among the ï¬ rst to arrange layers in blocks with parallel paths rather than a strict sequential order. We choose residual networks for this study because of their simple design principle. Highway networks Residual networks can be viewed as a special case of highway networks [19]. The output of each layer of a highway network is deï¬
ned as yi+1 â ¡ fi+1(yi) · ti+1(yi) + yi · (1 â ti+1(yi)) (3) 2We only consider identity skip connections, but this framework readily generalizes to more complex projection skip connections when downsampling is required. 2 = (a) Conventional 3-block residual network (b) Unraveled view of (a) Figure 1: Residual Networks are conventionally shown as (a), which is a natural representation of Equation (1). When we expand this formulation to Equation (6), we obtain an unraveled view of a 3-block residual network (b). Circular nodes represent additions. From this view, it is apparent that residual networks have O(2n) implicit paths connecting input and output and that adding a block doubles the number of paths. This follows the same structure as Equation (1). Highway networks also contain residual modules and skip connections that bypass them. However, the output of each path is attenuated by a gating function t, which has learned parameters and is dependent on its input. Highway networks are equivalent to residual networks when ti(·) = 0.5, in which case data ï¬ ows equally through both paths. Given an omnipotent solver, highway networks could learn whether each residual module should affect the data. This introduces more parameters and more complexity. Investigating neural networks Several investigative studies seek to better understand convolutional neural networks. For example, Zeiler and Fergus [23] visualize convolutional ï¬ lters to unveil the concepts learned by individual neurons. Further, Szegedy et al. [21] investigate the function learned by neural networks and how small changes in the input called adversarial examples can lead to large changes in the output. Within this stream of research, the closest study to our work is from Yosinski et al. [22], which performs lesion studies on AlexNet. They discover that early layers exhibit little co-adaptation and later layers have more co-adaptation. These papers, along with ours, have the common thread of exploring speciï¬ c aspects of neural network performance. In our study, we focus our investigation on structural properties of neural networks. Ensembling Since the early days of neural networks, researchers have used simple ensembling techniques to improve performance.
Though boosting has been used in the past [16], one simple approach is to arrange a committee [3] of neural networks in a simple voting scheme, where the final output predictions are averaged. Top performers in several competitions use this technique almost as an afterthought [6, 13, 18]. Generally, one key characteristic of ensembles is their smooth performance with respect to the number of members. In particular, the performance increase from additional ensemble members gets smaller with increasing ensemble size. Even though they are not strict ensembles, we show that residual networks behave similarly.

Dropout. Hinton et al. [7] show that dropping out individual neurons during training leads to a network that is equivalent to averaging over an ensemble of exponentially many networks. Similar in spirit, stochastic depth [9] trains an ensemble of networks by dropping out entire layers during training. In this work, we show that one does not need a special training strategy such as stochastic depth to drop out layers. Entire layers can be removed from plain residual networks without impacting performance, indicating that they do not strongly depend on each other.

# 3 The unraveled view of residual networks

To better understand residual networks, we introduce a formulation that makes it easier to reason about their recursive nature. Consider a residual network with three building blocks from input $y_0$ to output $y_3$. Equation (1) gives a recursive definition of residual networks. The output of each stage is based on the combination of two subterms. We can make the shared structure of the residual network apparent by unrolling the recursion into an exponential number of nested terms, expanding one layer
(a) Deleting $f_2$ from the unraveled view. (b) Ordinary feedforward network.

Figure 2: Deleting a layer in residual networks at test time (a) is equivalent to zeroing half of the paths. In ordinary feed-forward networks (b) such as VGG or AlexNet, deleting individual layers alters the only viable path from input to output.

at each substitution step:

$$y_3 = y_2 + f_3(y_2) \qquad (4)$$
$$= [y_1 + f_2(y_1)] + f_3\big(y_1 + f_2(y_1)\big) \qquad (5)$$
$$= \big[y_0 + f_1(y_0) + f_2\big(y_0 + f_1(y_0)\big)\big] + f_3\big(y_0 + f_1(y_0) + f_2\big(y_0 + f_1(y_0)\big)\big) \qquad (6)$$

We illustrate this expression tree graphically in Figure 1 (b). With subscripts in the function modules indicating weight sharing, this graph is equivalent to the original formulation of residual networks. The graph makes clear that data flows along many paths from input to output. Each path is a unique configuration of which residual module to enter and which to skip. Conceivably, each unique path through the network can be indexed by a binary code $b \in \{0, 1\}^n$ where $b_i = 1$ iff the input flows through residual module $f_i$ and $b_i = 0$ if $f_i$ is skipped. It follows that residual networks have $2^n$ paths connecting input to output layers.

In the classical visual hierarchy, each layer of processing depends only on the output of the previous layer. Residual networks cannot strictly follow this pattern because of their inherent structure. Each module $f_i(\cdot)$ in the residual network is fed data from a mixture of $2^{i-1}$ different distributions generated from every possible configuration of the previous $i - 1$ residual modules. Compare this to a strictly sequential network such as VGG or AlexNet, depicted conceptually in Figure 2 (b). In these networks, input always flows from the first layer straight through to the last in a single path. Written out, the output of a three-layer feed-forward network is

$$y^{FF}_3 = f^{FF}_3\big(f^{FF}_2\big(f^{FF}_1(y_0)\big)\big) \qquad (7)$$
In these networks, each f^FF_i is only fed data from a single path configuration: the output of f^FF_{i−1}. It is worthwhile to note that ordinary feed-forward neural networks can also be "unraveled" using the above thought process at the level of individual neurons rather than layers. This renders the network as a collection of different paths, where each path is a unique configuration of neurons from each layer connecting input to output. Thus, all paths through ordinary neural networks are of the same length. However, paths in residual networks have varying length. Further, each path in a residual network goes through a different subset of layers.
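To make the unraveled view concrete, the following is a minimal sketch (not from the paper) that uses scalar *linear* residual modules f_i(x) = w_i·x; under that simplifying assumption the recursive output decomposes exactly into a sum over all 2^n binary path configurations, which can be checked numerically. The weights and input below are arbitrary illustrative values.

```python
import itertools

# Toy unraveled view: a 3-block residual network with scalar linear modules.
w = [0.3, -0.5, 0.7]   # hypothetical module weights (assumption for illustration)
y0 = 2.0               # network input

# Recursive definition: y_i = y_{i-1} + f_i(y_{i-1})
y = y0
for wi in w:
    y = y + wi * y

# Unraveled view: enumerate every binary path code b in {0, 1}^n.
# b_i = 1 means the path goes through module f_i; b_i = 0 means it is skipped.
unraveled = 0.0
for code in itertools.product([0, 1], repeat=len(w)):
    contribution = y0
    for bi, wi in zip(code, w):
        if bi:
            contribution *= wi
    unraveled += contribution

print(f"recursive output   : {y:.6f}")
print(f"sum over {2**len(w)} paths  : {unraveled:.6f}")
assert abs(y - unraveled) < 1e-9
```

For nonlinear modules the decomposition is no longer an exact sum of independent terms, but the counting argument (2^n path configurations) is unchanged.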
Based on these observations, we formulate the following questions and address them in our experiments below. Are the paths in residual networks dependent on each other or do they exhibit a degree of redundancy? If the paths do not strongly depend on each other, do they behave like an ensemble? Do paths of varying lengths impact the network differently?

# 4 Lesion study

In this section, we use three lesion studies to show that paths in residual networks do not strongly depend on each other and that they behave like an ensemble. All experiments are performed at test
[Figure 3 plot: test classification error when dropping any single block from a residual network (110 layers) vs. a VGG network (15 layers) on CIFAR-10, with baselines. Figure 4 plot: top-1 error when dropping any single block from a 200-layer residual network on ImageNet, with baseline.]
Figure 4: Results when dropping individual blocks from residual networks trained on ImageNet are similar to CIFAR results. However, downsampling layers tend to have more impact on ImageNet.

Figure 3: Deleting individual layers from VGG and a residual network on CIFAR-10. VGG performance drops to random chance when any one of its layers is deleted, but deleting individual modules from residual networks has a minimal impact on performance. Removing downsampling modules has a slightly higher impact.
time on CIFAR-10 [12]. Experiments on ImageNet [2] show comparable results. We train residual networks with the standard training strategy, dataset augmentation, and learning rate policy of [6]. For our CIFAR-10 experiments, we train a 110-layer (54-module) residual network with modules of the "pre-activation" type, which apply batch normalization as the first step. For ImageNet we use 200 layers (66 modules). It is important to note that we did not use any special training strategy to adapt the network. In particular, we did not use any perturbations such as stochastic depth during training.
# 4.1 Experiment: Deleting individual layers from neural networks at test time

As a motivating experiment, we will show that not all transformations within a residual network are necessary by deleting individual modules from the neural network after it has been fully trained. To do so, we remove the residual module from a single building block, leaving the skip connection (or downsampling projection, if any) untouched. That is, we change y_i = y_{i−1} + f_i(y_{i−1}) to y'_i = y_{i−1}. We can measure the importance of each building block by varying which residual module we remove.
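The lesioning operation itself is simple; the following is a minimal NumPy sketch of the idea on a toy stack of residual blocks, not the 110-layer CIFAR-10 model used in the experiments. The block count, width, and random MLP-style residual functions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy residual stack: y_i = y_{i-1} + ReLU(W_i y_{i-1}).
n_blocks, dim = 8, 16
weights = [0.1 * rng.standard_normal((dim, dim)) for _ in range(n_blocks)]

def forward(x, deleted=None):
    """Run the stack, optionally lesioning one block (keep only its skip connection)."""
    y = x
    for i, W in enumerate(weights):
        if i == deleted:
            continue                    # y_i = y_{i-1}
        y = y + np.maximum(W @ y, 0.0)  # y_i = y_{i-1} + f_i(y_{i-1})
    return y

x = rng.standard_normal(dim)
full = forward(x)
for i in range(n_blocks):
    change = np.linalg.norm(full - forward(x, deleted=i)) / np.linalg.norm(full)
    print(f"deleting block {i}: relative output change {change:.3f}")
```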
To compare to conventional convolutional neural networks, we train a VGG network with 15 layers, setting the number of channels to 128 for all layers to allow the removal of any layer. It is unclear whether any neural network can withstand such a drastic change to the model structure. We expect them to break because dropping any layer drastically changes the input distribution of all subsequent layers. The results are shown in Figure 3. As expected, deleting any layer in VGG reduces performance to chance levels. Surprisingly, this is not the case for residual networks. Removing downsampling blocks does have a modest impact on performance (peaks in Figure 3 correspond to downsampling building blocks), but no other block removal leads to a noticeable change. This result shows that, to some extent, the structure of a residual network can be changed at runtime without affecting performance. Experiments on ImageNet show comparable results, as seen in Figure 4. Why are residual networks resilient to dropping layers while VGG is not? Expressing residual networks in the unraveled view provides a first
insight. It shows that residual networks can be seen as a collection of many paths. As illustrated in Figure 2 (a), when a layer is removed, the number of paths is reduced from 2^n to 2^{n−1}, leaving half the number of paths valid. VGG only contains a single usable path from input to output. Thus, when a single layer is removed, the only viable path is corrupted. This result suggests that paths in a residual network do not strongly depend on each other although they are trained jointly.
# 4.2 Experiment: Deleting many modules from residual networks at test-time

Having shown that paths do not strongly depend on each other, we investigate whether the collection of paths shows ensemble-like behavior. One key characteristic of ensembles is that their performance depends smoothly on the number of members.

[Figure 5 plots: error when deleting layers vs. number of layers deleted; error when permuting layers vs. Kendall Tau correlation.]
Figure 5: (a) Error increases smoothly when randomly deleting several modules from a residual network. (b) Error also increases smoothly when re-ordering a residual network by shuffling building blocks. The degree of reordering is measured by the Kendall Tau correlation coefficient. These results are similar to what one would expect from ensembles.

If the collection of paths were to behave like an ensemble, we would expect the test-time performance of residual networks to correlate smoothly with the number of valid paths. This is indeed what we observe: deleting increasing numbers of residual modules increases error smoothly, Figure 5 (a). This implies residual networks behave like ensembles. When deleting k residual modules from a network originally of length n, the number of valid paths decreases to O(2^{n−k}). For example, the original network started with 54 building blocks, so deleting 10 blocks leaves 2^{44} paths. Though the collection is now a factor of roughly 10^{−3} of its original size, there are still many valid paths and error remains around 0.2.
k). For example, the original network started with 54 building blocks, so deleting 10 blocks leaves 244 paths. Though the collection is now a factor of roughly 10â 6 of its original size, there are still many valid paths and error remains around 0.2. # 4.3 Experiment: Reordering modules in residual networks at test-time Our previous experiments were only about dropping layers, which have the effect of removing paths from the network. In this experiment, we consider changing the structure of the network by re-ordering the building blocks. This has the effect of removing some paths and inserting new paths that have never been seen by the network during training. In particular, it moves high-level transformations before low-level transformations. To re-order the network, we swap k randomly sampled pairs of building blocks with compatible dimensionality, ignoring modules that perform downsampling.
We graph error with respect to the Kendall Tau rank correlation coefficient, which measures the amount of corruption. The results are shown in Figure 5 (b). As corruption increases, the error smoothly increases as well. This result is surprising because it suggests that residual networks can be reconfigured to some extent at runtime.

# 5 The importance of short paths in residual networks

Now that we have seen that there are many paths through residual networks and that they do not necessarily depend on each other, we investigate their characteristics.

Distribution of path lengths Not all paths through residual networks are of the same length. For example, there is precisely one path that goes through all modules and n paths that go only through a single module. From this reasoning, the distribution of all possible path lengths through a residual network follows a Binomial distribution. Thus, we know that the path lengths are closely centered around the mean of n/2. Figure 6 (a) shows the path length distribution for a residual network with 54 modules; more than 95% of paths go through 19 to 35 modules.

Vanishing gradients in residual networks Generally, data flows along all paths in residual networks. However, not all paths carry the same amount of gradient. In particular, the length of the paths through the network affects the gradient magnitude during backpropagation [1, 8]. To empirically investigate the effect of vanishing gradients on residual networks we perform the following experiment. Starting from a trained network with 54 blocks, we sample individual paths of a certain length and measure the norm of the gradient that arrives at the input. To sample a path of length k, we first feed a batch forward through the whole network. During the backward pass, we randomly sample k residual
Figure 6: How much gradient do the paths of different lengths contribute in a residual network? To find out, we first show the distribution of all possible path lengths (a). This follows a Binomial distribution. Second, we record how much gradient is induced on the first layer of the network through paths of varying length (b), which appears to decay roughly exponentially with the number of modules the gradient passes through. Finally, we can multiply these two functions (c) to show how much gradient comes from all paths of a certain length. Though there are many paths of medium length, paths longer than ~20 modules are generally too long to contribute noticeable gradient during training. This suggests that the effective paths in residual networks are relatively shallow.
blocks. For those k blocks, we only propagate through the residual module; for the remaining n − k blocks, we only propagate through the skip connection. Thus, we only measure gradients that flow through the single path of length k. We sample 1,000 measurements for each length k using random batches from the training set. The results show that the gradient magnitude of a path decreases exponentially with the number of modules it went through in the backward pass, Figure 6 (b).

The effective paths in residual networks are relatively shallow Finally, we can use these results to deduce whether shorter or longer paths contribute most of the gradient during training.
To find the total gradient magnitude contributed by paths of each length, we multiply the frequency of each path length with the expected gradient magnitude. The result is shown in Figure 6 (c). Surprisingly, almost all of the gradient updates during training come from paths between 5 and 17 modules long. These are the effective paths, even though they constitute only 0.45% of all paths through this network. Moreover, in comparison to the total length of the network, the effective paths are relatively shallow.

To validate this result, we retrain a residual network from scratch that only sees the effective paths during training. This ensures that no long path is ever used. If the retrained model is able to perform competitively compared to training the full network, we know that long paths in residual networks are not needed during training. We achieve this by only training a subset of the modules during each mini-batch. In particular, we choose the number of modules such that the distribution of paths during training aligns with the distribution of the effective paths in the whole network. For the network with 54 modules, this means we sample exactly 23 modules during each training batch. Then, the path lengths during training are centered around 11.5 modules, well aligned with the effective paths. In our experiment, the network trained only with the effective paths achieves a 5.96% error rate, whereas the full model achieves a 6.10% error rate.
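The "frequency × gradient magnitude" product is easy to reproduce schematically. In the sketch below the per-module gradient attenuation factor (0.25) is an arbitrary illustrative constant; the paper measures this decay empirically (Figure 6 b) rather than assuming a value.

```python
from math import comb

n = 54          # number of residual modules
decay = 0.25    # assumed per-module gradient attenuation (illustrative only)

freq  = [comb(n, k) for k in range(n + 1)]    # number of paths of length k (Fig. 6 a)
grad  = [decay ** k for k in range(n + 1)]    # assumed gradient magnitude per path (Fig. 6 b)
total = [f * g for f, g in zip(freq, grad)]   # total contribution per length (Fig. 6 c)

share_5_17 = sum(total[5:18]) / sum(total)
print(f"share of total gradient from paths of length 5-17: {share_5_17:.1%}")
```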
There is no statistically significant difference. This demonstrates that indeed only the effective paths are needed.

# 6 Discussion

Removing residual modules mostly removes long paths Deleting a module from a residual network mainly removes the long paths through the network. In particular, when deleting d residual modules from a network of length n, the fraction of paths remaining per path length x is given by

fraction of remaining paths of length x = C(n − d, x) / C(n, x)   (8)

where C(·, ·) denotes the binomial coefficient. Figure 7 illustrates the fraction of remaining paths after deleting 1, 10 and 20 modules from a 54-module network. It becomes apparent that the deletion of residual modules mostly affects the long paths. Even after deleting 10 residual modules, many of the effective paths between 5 and 17 modules long are still valid. Since mainly the effective paths are important for performance, this result is in line with the experiment shown in Figure 5 (a). Performance only drops slightly up to the removal of 10 residual modules; however, for the removal of 20 modules, we observe a severe drop in performance.
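A small numerical check of Eq. (8), assuming only what the equation states (a path of length x survives iff all x of its modules are among the n − d kept ones):

```python
from math import comb

n = 54
for d in (1, 10, 20):
    surviving = [comb(n - d, x) / comb(n, x) for x in range(n - d + 1)]
    print(f"d = {d:2d}: fraction of length-10 paths remaining = {surviving[10]:.3f}")
```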
[Figure 7 plot: fraction of remaining paths vs. path length after deleting 1, 10 or 20 modules from a 54-module network, with the effective-path range marked. Figure 8 plot: error when dropping any single block for a 110-layer residual network vs. one trained with stochastic depth (d = 0.5, linear decay) on CIFAR-10.]

Figure 7: Fraction of paths remaining after deleting individual layers. Deleting layers mostly affects long paths through the networks.
Figure 8: Impact of stochastic depth on resilience to layer deletion. Training with stochastic depth only improves resilience slightly, indicating that plain residual networks already don't depend on individual layers. Compare to Fig. 3.

Connection to highway networks In highway networks, t_i(·) multiplexes data flow through the residual and skip connections, and t_i(·) = 0.5 means both paths are used equally. For highway networks in the wild, [19] observe empirically that the gates commonly deviate from t_i(·) = 0.5. In particular, they tend to be biased toward sending data through the skip connection; in other words, the network learns to use short paths. Similar to our results, this reinforces the importance of short paths.

Effect of stochastic depth training procedure Recently, an alternative training procedure for residual networks has been proposed, referred to as stochastic depth [9]. In that approach a random subset of the residual modules is selected for each mini-batch during training. The forward and backward pass is only performed on those modules. Stochastic depth does not affect the number of paths in the network because all paths are available at test time. However, it changes the distribution of paths seen during training. In particular, mainly short paths are seen. Further, by selecting a different subset of short paths in each mini-batch, it encourages the paths to produce good results independently.
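For reference, a toy sketch of the per-mini-batch module subsampling idea just described. The constant 0.5 survival probability and the random linear blocks are illustrative assumptions; the schedule of [9] uses a linearly decaying survival probability.

```python
import numpy as np

rng = np.random.default_rng(0)

n_blocks, dim, survival = 8, 16, 0.5
weights = [0.1 * rng.standard_normal((dim, dim)) for _ in range(n_blocks)]

def stochastic_depth_forward(x, train=True):
    """Keep each residual module with probability `survival` during training."""
    y, active = x, 0
    for W in weights:
        if train and rng.random() > survival:
            continue                    # module dropped for this mini-batch
        y = y + np.maximum(W @ y, 0.0)
        active += 1
    return y, active

x = rng.standard_normal(dim)
for step in range(3):
    _, active = stochastic_depth_forward(x)
    print(f"mini-batch {step}: {active} of {n_blocks} modules active")
```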
Does this training procedure significantly reduce the dependence between paths? We repeat the experiment of deleting individual modules for a residual network trained using stochastic depth. The result is shown in Figure 8. Training with stochastic depth improves resilience slightly; only the dependence on the downsampling layers seems to be reduced. By now, this is not surprising: we know that plain residual networks already don't depend on individual layers.

# 7 Conclusion

What is the reason behind residual networks' increased performance? In the most recent iteration of residual networks, He et al. [6] provide one hypothesis: "We obtain these results via a simple but essential concept - going deeper." While it is true that they are deeper than previous approaches, we present a complementary explanation. First, our unraveled view reveals that residual networks can be viewed as a collection of many paths, instead of a single ultra-deep network. Second, we perform lesion studies to show that, although these paths are trained jointly, they do not strongly depend on each other. Moreover, they exhibit ensemble-like behavior in the sense that their performance smoothly correlates with the number of valid paths. Finally, we show that the paths through the network that contribute gradient during training are shorter than expected. In fact, deep paths are not required during training as they do not contribute any gradient. Thus, residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network. This insight reveals that depth is still an open research question. These promising observations provide a new lens through which to examine neural networks.
# Acknowledgements

We would like to thank Sam Kwak and Theofanis Karaletsos for insightful feedback. We also thank the reviewers of NIPS 2016 for their very constructive and helpful feedback and for suggesting the paper title. This work is partly funded by AOL through the Connected Experiences Laboratory (Author 1), an NSF Graduate Research Fellowship award (NSF DGE-1144153, Author 2), and a Google Focused Research award (Author 3).

# References
[1] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[2] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition, 2009.
[3] Harris Drucker, Corinna Cortes, Lawrence D. Jackel, Yann LeCun, and Vladimir Vapnik. Boosting and other ensemble methods. Neural Computation, 6(6):1289–
1301, 1994.
[4] Kunihiko Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, 1980.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
[7] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[8] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen.
Master's thesis, Institut für Informatik, Technische Universität München, 1991.
[9] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016.
[10] David H Hubel and Torsten N Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex.
The Journal of Physiology, 160(1):106–154, 1962.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015.
[12] Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278–2324, 1998.
[15] Jitendra Malik and Pietro Perona. Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America, 1990.
[16] Robert E Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990.
[17] Thomas Serre, Aude Oliva, and Tomaso Poggio. A feedforward architecture accounts for rapid categorization.
Proceedings of the National Academy of Sciences, 104(15):6424–6429, 2007.
[18] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
[20] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich.
Going deeper with convolutions. In Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[21] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[22] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, 2014.
[23] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision – ECCV 2014, pages 818–833. Springer, 2014.
# TERNARY WEIGHT NETWORKS

Fengfu Li1*, Bin Liu2*, Xiaoxing Wang2, Bo Zhang1†, Junchi Yan2†
1Institute of Applied Math., AMSS, CAS, Beijing, China
[email protected], [email protected]
2MOE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, China
{binliu_sjtu, figure1_wxx, yanjunchi}@sjtu.edu.cn

# ABSTRACT

We present memory- and computation-efficient ternary weight networks (TWNs), with weights constrained to +1, 0 and -1. The Euclidean distance between the full (float or double) precision weights and the ternary weights along with a scaling factor is minimized in the training stage. Besides, a threshold-based ternary function is optimized to get an approximated solution which can be computed quickly and easily. TWNs show stronger expressive ability than their binary precision counterparts. Meanwhile, TWNs achieve up to a 16× model compression rate and need fewer multiplications compared with the float32 precision counterparts. Extensive experiments on the MNIST, CIFAR-10, and ImageNet datasets show that TWNs achieve much better results than Binary-Weight-Networks (BWNs), and the classification performance on MNIST and CIFAR-10 is very close to that of the full precision networks. We also verify our method on an object detection task and show that TWNs significantly outperform BWNs by more than 10% mAP on the PASCAL VOC dataset. The PyTorch version of the source code is available at: https://github.com/Thinklab-SJTU/twns.

Extensions of earlier weight-binarization methods such as BinaryConnect [10] are BinaryNet [11] and XNOR-Net [7], where both weights and activations are binary-valued. These models eliminate most of the multiplications in the forward and backward propagations, and thus have the potential of gaining significant
benefits with specialized deep learning (DL) hardware by replacing many multiply-accumulate operations with simple accumulations [12]. Besides, binary weight networks achieve up to a 32× model compression rate. Beyond these binarization techniques, some other compression methods focus on identifying models with few parameters while preserving accuracy by compressing existing state-of-the-art DNN models in a lossy way. SqueezeNet [13] is such a model; it has 50× fewer parameters than AlexNet [2] but maintains AlexNet-level accuracy on ImageNet. MobileNet [14] and ShuffleNet [15] propose lightweight architectures to reduce the parameters and computation cost. Other methods propose to search for efficient architectures and achieve great performance on both classification [16, 17] and object detection [18]. Deep Compression [9] is another recently proposed method that uses pruning, trained quantization and Huffman coding for compressing neural networks. It reduced the storage requirements of AlexNet and VGG-16 [3] by 35×
and 49×, respectively, without loss of accuracy.

(*: Equal contribution. †: Corresponding authors.)

# 1. INTRODUCTION AND RELATED WORK

Deep neural networks (DNNs) have made significant improvements in many computer vision tasks such as object recognition [1, 2, 3, 4] and object detection [5, 6]. This motivates interest in deploying state-of-the-art DNN models to real-world applications such as smart phones, wearable embedded devices, and other edge computing devices. However, these models often need considerable storage and computational power [7], and can easily overburden the limited storage, battery power, and compute capabilities of such devices. As a result, deployment remains a challenge.

To mitigate the storage and computation problem [8, 9], methods that seek to binarize weights or activations in DNN models have been proposed. BinaryConnect [10] uses a single sign function to binarize the weights. Binary Weight Networks [7] adopt the same binarization function but add an extra scaling factor.

This paper has the following contributions:
The extensions of the previous methods â : Equal contribution. â Correspondence authors. 1) To our best knowledge, this was the ï¬ rst (at least at its debut in arxiv) ternary weight quantization scheme to reduce storage and computational cost for deep neural networks. 2) We propose an approximated and universal solution with threshold-based ternary function for calculating the ternary weights of the raw neural networks. 3) Experiments show the efï¬ cacy of our approach on public benchmarks for both image classiï¬ cation and detection. # 2. TERNARY WEIGHT NETWORKS # 2.1. Advantage Overview We address the limited storage and computational resources issues by introducing ternary weight networks (TWNs), which constrain the weights to be ternary-valued: +1, 0 and -1. TWNs seek to make a balance between the full precision weight net- works (FPWNs) counterparts and the binary precision weight networks (BPWNs) counterparts.
The detailed features are listed as follows. Expressive ability In most recent network architectures such as VGG [3], GoogLeNet [4] and ResNet [1], a most commonly used convolutional ï¬ lter is of size 3à 3. With binary precision, there is only 23à 3 = 512 templates. However, a ternary ï¬ lter with the same size owns 33à 3 = 19683 templates, which gains 38Ã
more than the binary counterpart and hence gives stronger expressive ability.

Model compression In TWNs, only 2-bit storage is needed per weight. Thus, TWNs achieve up to a 16× model compression rate compared with the float32 precision counterparts. Take VGG-19 [3] as an example: the float version of the model needs ~500M of storage, which can be reduced to ~32M with ternary precision. Thus, although the compression rate of TWNs is 2× lower than that of BPWNs, it is sufficient for compressing most of the existing state-of-the-art DNN models.

Computational requirement Compared with BPWNs, TWNs have an extra zero state. However, the zero terms need not be accumulated in any multiply operation. Thus, the number of multiply-accumulate operations in TWNs is unchanged compared with the binary precision counterparts. As a result, it is also hardware-friendly for training large-scale networks with specialized DL hardware.

In the following parts, we give detailed descriptions of the ternary weight network problem and an approximated but
efficient solution. After that, a simple training algorithm with error back-propagation is introduced, and the run-time usage is described at last.

# 2.2. Problem Formulation

To make ternary weight networks perform well, we seek to minimize the Euclidean distance between the full precision weights W and the ternary-valued weights Ŵ along with a nonnegative scaling factor α [7]. The optimization problem is formulated as follows:

α*, Ŵ* = argmin_{α, Ŵ} J(α, Ŵ) = ||W − αŴ||²₂
s.t.  α ≥ 0,  Ŵ_i ∈ {−1, 0, +1},  i = 1, 2, ..., n   (1)

Here n is the size of the filter. With the approximation W ≈ αŴ, a basic block of forward propagation in ternary weight networks is as follows:

Z = X ∗ W ≈ X ∗ (αŴ) = (αX) ⊕ Ŵ
X_next = g(Z)   (2)

where X is the input of the block; ∗ is a convolution or inner product operation; g is a nonlinear activation function; ⊕ indicates a convolution or an inner product operation without multiplication; Z is the output feature map of the neural network block, which can also be used as the input of the next block.

# 2.3. Threshold-based Ternary Function
Thus, there is no deterministic solution in this way [19]. To overcome this, we try to ï¬ nd an approximated optimal solution with a threshold-based ternary function, Ë Wi = f (Wi|â ) = +1 0 â 1 if Wi > â |Wi| â ¤ â if if Wi < â â (3) Here â is an positive threshold parameter. With Eq. 3, the original problem can be transformed to αâ , â â = arg min α⠥0,â >0 (|Wâ |α2 â 2( iâ Iâ |Wi|)α + câ ) (4) where I, = {i||W;| > A} and |I,| denotes the number of elements in Iy; cq = Viers: w? is a a independent con- stant. Thus, for any given A, the optimal a can be computed as follows, 1 an == WwW; (5) A= py (Wi A By substituting αâ which can be simpliï¬ ed as follows, â into Eq. 4, we get a â dependent equation, 1 A* = arg min â W;)? (6) going fT The above euqation has no straightforward solutions. Though discrete optimization can be made to solve the prob- lem (due to states of Wi is ï¬ nite), it should be very time consuming. As a viable alternative, we make a single as- sumption that Wi are generated from uniform or normal In case of Wi are uniformly distributed in distribution. [â α, α] and â lies in (0, α], the approximated â â is α 3 , which equals to 2 3 E(|W|). When Wi is generated from normal distributions N (0, Ï 2), the approximated â â is 0.6Ï which equals to 0.75E(|W|). Thus, we can use a rule of thumb that â â â
0.75E(|W|) â 0.75 n 2.4. Training of Ternary-Weight-Networks CNNs typically includes Convolution layer, Fully-Connected layer, Pooling layer (e.g.,Max-Pooling, Avg-Pooling), Batch- Normalization (BN) layer [20] and Activation layer (e.g.,ReLU, Sigmoid), in TWNs, we also follow the traditional neural network block design philosophy, the order of layers in a typical ternary block of TWNs is shown in Fig. 1. We borrow the parameter optimization strategy which suc- cessfully applied from BinaryConncet [10] and XNOR-Net [7], in our design, ternarization only happens at the forward and backward pass in convolution and fully-connected layers, but in the parameters update stage, we still keep a copy of the Algorithm 1: Train a M-layers CNN w/ ternary weights Rwne mw Algorithm a M-layers ternary Inputs : A minibatch of inputs and targets (I, Y), loss function L(Y, Y) and current weight W'. Hyper-parameter : current learning rate 7)â
. Outputs updated weight W'+!, updated learning rate nâ +!. Make the float32 weight filters as ternery ones: form = 1to M do for kâ â filter in m"â layer do Amk = 2 \|Wra lle Wrmk = {-1,0, +1}, refer to Eq,[3| Wrt?Winks Wink'Wmk Tink = mkWink Amk = 8 Ë Y = TernaryForward(I, Ë W, α) //standard forward propagation , Ë T ) //standard backward = TernaryBackward( â L â L â Ë T â Ë
Y propagation except that gradients are computed using T instead of W t 10 W t+1 = UpdateParameters(W t, â L â T , ηt) // we use SGD in this paper 11 ηt+1 = UpdateLearningrate(ηt, t) //we use learning rate step decay in this paper Fig. 1. A typical Ternary block in TWNs. In the forward pass, we apply ternarization operation for the weight of convolution layer meanwhile the ï¬ oat32 weight will be cached for future parameter update; in the backward pass, we calculate ternary weight gradient to update the ï¬ oat32 weight. full-precision parameters. In addition, two effective tricks, Batch-Normalization and learning rate step decay that drops the learning rate by a factor every few epochs, are adopted. We use stochastic gradient descent (SGD) with momentum to update the the parameters when training TWNs, the detailed training strategy show in Table 1.
2.5. Inference of Ternary-Weight-Networks In the forward pass, the scaling factor α could be transformed to the inputs according to Eq. 2. Thus, we only need to keep the ternary-valued weights and the scaling factors for deployment. This would results up to 16à model compression rate for deployment compared with the ï¬ oat32 precision counterparts. # 3. EXPERIMENTS AND DISCUSSION We benchmark Ternary Weight Networks (TWNs) with Bi- nary Weight Networks (BPWNs) and Full Precision Networks (FPWNs) on both classiï¬ cation task (MNIST, CIFAR-10 and ImageNet) and object detection task (PASCAL VOC).
Table 1. Backbones and hyperparameters setting for different datasets used by our method on three benchmarks. MNIST CIFAR-10 ImageNet backbone architecture weight decay mini-batch size initial learning rate learning rate adjust step2 momentum LeNet-5 1e-4 50 0.01 15, 25 0.9 VGG-7 1e-4 100 0.1 80, 120 0.9 ResNet18B 1e-4 64(x4)1 0.1 30, 40, 50 0.9 For a fair comparison, we keep the following conï¬ gures to be same: network architecture, regularization method (L2 weight decay), learning rate scaling procedure (multi-step) and optimization method (SGD with momentum). BPWNs use sign function to binarize the weights and FPWNs use ï¬ oat- valued weights. See Table 1 for training conï¬ gurations. 3.1. Experiments of Classiï¬
cation MNIST is a collection of handwritten digits. It is a very popular dataset in the ï¬ eld of image processing. The LeNet- 5 [21] architecture we used in MNIST experiment is â 32-C5 + MP2 + 64-C5 + MP2 + 512 FC + SVMâ which starts with a 5x5 convolutional block that includes a convolution layer, a BN layer and a relu layer. Then a max-pooling layer is followed with stride 2.
The â FCâ is a fully connect block with 512 nodes. The top layer is a SVM classiï¬ er with 10 labels. Finally, hinge loss is minimized with SGD. CIFAR-10 consists of 10 classes with 6K color images of 32x32 resolution for each class. It is divided into 50K training and 10K testing images. We deï¬ ne a VGG inspired architecture, denoted as VGG-7, by â 2à (128-C3) + MP2 + 2à (256-C3) + MP2 + 2à (512-C3) + MP2 + 1024-FC + Soft- maxâ
. Compared with the architecture in [10], we ignore the last fully connected layer. We follow the data augmentation in [1, 22] for training: 4 pixels are padded on each side, and a 32à 32 crop is randomly sampled from the padded image or its horizontal ï¬ ip. At testing time, we only evaluate the single view of the original 32à 32 image. ImageNet consists of about 1.2 million train images from 1000 categories and 50,000 validation images. ImageNet has higher resolution and greater diversity, is more close to real life than MNIST and CIFAR-10. We adopt the popu- lar ResNet18 architecture [1] as backbone. Besides, we also benchmark another enlarged counterpart whose number of ï¬ l- ters in each block is 1.5à of the original one which is termed as ResNet18B. In each training iteration, images are randomly cropped with 224à 224 size. We do not use any resize tricks [7] or any color augmentation.
Table 2 shows the classiï¬ cation results. On the small datasets (MNIST and CIFAR-10), TWNs achieve similar per- 1We use 4 GPUs to speed up the training. 2Learning rate is divided by 10 at these epochs. 'â â Full precision (ResNet-18) Full precision (ResNet-18B) 1-2» Temary precision (ResNet-18) | |p-e-* Ternary precision (ResNet-18B) 1: }»â -#â «Binary precision (ResNetâ 18) Binary precision (ResNet-18B) 02 + 0 5 10 15 20 25 30 35 40 45 50 55 60 Epochs 0.9 0.75 Full precision (VGG7~128) Temary precision (VGG7-128) Binary precision (VGG7-128) + 0 2 40 60 80 100 120 140 160 180 Epochs 0.995 0.99 0.985 0.98 + -- 0.975 0.97 â Accuracy 0.965 + 0.96 + ~ _ |e Ternary precision (LeNet-5) Binary precision (LeNet-5) 0.955 0.95 +# 15 20 Epochs 2 30 35 «40 0.995 0.99 0.9 0.985 0.98 + -- 0.975 0.97 â Accuracy 0.965 + 'â â Full precision (ResNet-18) Full precision (ResNet-18B) 1-2» Temary precision (ResNet-18) 0.96 + ~ _ |e Ternary precision (LeNet-5) 0.75 Binary precision (LeNet-5) 0.955 | |p-e-* Ternary precision (ResNet-18B) 1: }»â -#â «Binary precision (ResNetâ
18) Binary precision (ResNet-18B) Full precision (VGG7~128) Temary precision (VGG7-128) Binary precision (VGG7-128) 0.95 +# + 15 20 0 2 Epochs (a) MNIST 2 30 35 «40 40 60 80 100 120 140 160 180 Epochs (b) CIFAR-10 02 + 0 5 10 15 20 25 30 35 40 45 50 55 60 Epochs (c) ImageNet (top-5) (b) CIFAR-10 Fig. 2. Classiï¬ cation accuracy over training epochs MNIST (top-1 accuracy), CIFAR10 (top-1) and ImageNet (top-5). # (a) MNIST # (c) ImageNet (top-5)
Table 2. Classification accuracy (%) on MNIST, CIFAR-10 and ImageNet, with ResNet18 (or ResNet18B in brackets) as the ImageNet backbone.

| | MNIST | CIFAR-10 | ImageNet (top-1) | ImageNet (top-5) |
|---|---|---|---|---|
| TWNs (our main approach) | 99.35 | 92.56 | 61.80 (65.3) | 84.20 (86.2) |
| BPWNs (binary precision counterpart) | 99.05 | 90.18 | 57.50 (61.6) | 81.20 (83.9) |
| FPWNs (full precision counterpart) | 99.41 | 92.88 | 65.4 (67.6) | 86.76 (88.0) |
| BinaryConnect [10] | 98.82 | 91.73 | - | - |
| Binarized Neural Networks [11] | 98.60 | 89.85 | - | - |
| Binary Weight Networks [7] | - | - | 60.8 | 83.0 |
| XNOR-Net [7] | - | - | 51.2 | 73.2 |

Table 3. Detection performance (%) on PASCAL VOC with YOLOv5 (small) as the detector.

| | Precision | Recall | mAP_50 | mAP_50:95 |
|---|---|---|---|---|
| TWNs (our main approach) | 78.0% | 69.1% | 76.8% | 51.5% |
| BPWNs (binary precision counterpart) | 69.8% | 56.7% | 62.9% | 39.4% |
| FPWNs (full precision counterpart) | 83.3% | 80.8% | 86.7% | 63.7% |
The validation accuracy curves of different approaches across all training epochs on MNIST, CIFAR-10 and Ima- geNet datasets illustrate in Fig. 2. As we can see in the ï¬ gure, obviously, BPWNs converge slowly and the training loss is not stable compared with TWNs and FPWNs. However, TWNs converge almost as fast and stably as FPWNs. 3.2. Experiments of Detection PASCAL VOC [23] consists of 20 classes with 11540 images and 27450 labeled objects. We adopt the popular YOLOv5 (small) [24] architecture and compare the performance of full precision, binary precision and ternary precision in Table 3.
Specifically, we initialize each model with the weights trained on the MS-COCO dataset [25] (provided by YOLOv5) and fine-tune each model for 150 epochs. We observe that TWNs significantly outperform BPWNs by more than 10% mAP, showing the great effectiveness of our method.

# 4. CONCLUSION

In this paper, we have introduced the simple, efficient, and accurate ternary weight networks for real-world AI applications, which can reduce the memory usage by about 16× and the computation by about 2×. We present the optimization problem of TWNs and give an approximated solution with a simple but effective ternary function. The proposed TWNs achieve a balance between accuracy and model compression rate, as well as the potentially low computational requirements of BPWNs. Empirical results on public benchmarks show the superior performance of the proposed method.
# 5. REFERENCES

[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, "Deep residual learning for image recognition," arXiv preprint arXiv:1512.03385, 2015.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[3] K. Simonyan and A. Zisserman, "