Dataset schema (column: type, observed length/value range):
doi — string (10 chars)
chunk-id — int64 (0 to 936)
chunk — string (401 to 2.02k chars)
id — string (12 to 14 chars)
title — string (8 to 162 chars)
summary — string (228 to 1.92k chars)
source — string (31 chars)
authors — string (7 to 6.97k chars)
categories — string (5 to 107 chars)
comment — string (4 to 398 chars)
journal_ref — string (8 to 194 chars)
primary_category — string (5 to 17 chars)
published — string (8 chars)
updated — string (8 chars)
references — list
1704.07138
42
Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA. and Michael Collins. 2013. Optimal beam search for machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Seattle, Washington, USA, pages 210–221. http://www.aclweb.org/anthology/D13-1022. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Lluís Màrquez, Chris Callison-Burch, Jian Su, Daniele Pighin, and Yuval Marton, editors, EMNLP. The Association for Computational Linguistics, pages 379–389. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. http://aclweb.org/anthology/P/P16/P16-1162.pdf.
1704.07138#42
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
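For illustration, a minimal sketch of the two ideas stated in the abstract: scoring a candidate sequence under the factorized model $p(\mathbf{y}|\mathbf{x}) = \prod_t p(y_t | \mathbf{x}; y_0 \ldots y_{t-1})$, and checking that every lexical constraint phrase appears in the output. The `cond_logprob` callback is a hypothetical stand-in for any conditional sequence model; this is not the authors' Grid Beam Search implementation.

```python
# Hedged sketch: sequence scoring under the factorization in the abstract,
# plus a check that all lexical constraints are covered by a hypothesis.
# `cond_logprob(x, prefix, token)` is an assumed interface, not a real API.
from typing import Callable, List, Sequence


def sequence_logprob(x,
                     y: Sequence[str],
                     cond_logprob: Callable[[object, Sequence[str], str], float]) -> float:
    """log p(y | x) = sum_t log p(y_t | x, y_0 ... y_{t-1})."""
    total = 0.0
    for t, token in enumerate(y):
        total += cond_logprob(x, y[:t], token)
    return total


def satisfies_constraints(y: Sequence[str], constraints: List[Sequence[str]]) -> bool:
    """True if every constraint phrase occurs as a contiguous subsequence of y."""
    def contains(phrase: Sequence[str]) -> bool:
        n = len(phrase)
        return any(list(y[i:i + n]) == list(phrase) for i in range(len(y) - n + 1))
    return all(contains(c) for c in constraints)
```

In the full algorithm, Grid Beam Search organizes hypotheses by both time step and the number of constraint tokens already covered, so that only hypotheses covering all constraints can terminate; the two functions above only illustrate the scoring and the coverage test.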
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
43
Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI'16, pages 3776–3783. http://dl.acm.org/citation.cfm?id=3016387.3016435. Jason R. Smith, Herve Saint-Amand, Chris Callison-Burch, Magdalena Plamada, and Adam Lopez. 2013. Dirt cheap web-scale parallel text from the Common Crawl. In Proceedings of the Conference of the Association for Computational Linguistics (ACL). Lucia Specia. 2011. Exploiting objective annotations for measuring translation post-editing effort. In Proceedings of the European Association for Machine Translation, May. Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Tomaž Erjavec, and Dan Tufiș. 2006. The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), pages 2142–2147.
1704.07138#43
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
44
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems. MIT Press, Cambridge, MA, USA, NIPS'14, pages 3104–3112. http://dl.acm.org/citation.cfm?id=2969033.2969173. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2016. Neural machine translation with reconstruction. arXiv preprint arXiv:1611.01874. Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. 2015. Blocks and fuel: Frameworks for deep learning. CoRR abs/1506.00619.
1704.07138#44
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
45
Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144. http://arxiv.org/abs/1609.08144.
1704.07138#45
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
46
Joern Wuebker, Spence Green, John DeNero, Sasa Hasan, and Minh-Thang Luong. 2016. Models and inference for prefix-constrained machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 66–75. http://www.aclweb.org/anthology/P16-1007. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In David Blei and Francis Bach, editors, Proceedings of the 32nd International Conference on Machine Learning (ICML-15). JMLR Workshop and Conference Proceedings, pages 2048–2057. http://jmlr.org/proceedings/papers/v37/xuc15.pdf. Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. CoRR abs/1212.5701. http://arxiv.org/abs/1212.5701.
1704.07138#46
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
47
Ventsislav Zhechev. 2012. Machine Translation Infrastructure and Post-editing Performance at Autodesk. In AMTA 2012 Workshop on Post-Editing Technology and Practice (WPTP 2012). Association for Machine Translation in the Americas (AMTA), San Diego, USA, pages 87–96.

# A NMT System Configurations

We train all systems for 500,000 iterations, with validation every 5,000 steps. The best single model from validation is used in all of the experiments for a language pair. We use $\ell_2$ regularization on all parameters with $\alpha = 1\mathrm{e}{-5}$. Dropout is used on the output layers with $p(\text{drop}) = 0.5$. We sort mini-batches by source sentence length, and reshuffle the training data after each epoch. All systems use a bidirectional GRU (Cho et al., 2014) to create the source representation and GRUs for the decoder transition. We use AdaDelta (Zeiler, 2012) to update gradients, and clip large gradients to 1.0.
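For illustration, a minimal PyTorch-style sketch of the optimization settings just described (AdaDelta, $\ell_2$ regularization, dropout of 0.5 on output layers, gradient clipping to 1.0). The model stack and loss are placeholders; this is an assumed reimplementation for clarity, not the authors' code.

```python
# Hedged sketch of the training settings above; the sequential stack is a
# hypothetical placeholder, not the paper's GRU encoder-decoder.
import torch
from torch import nn

model = nn.Sequential(                      # stand-in for the encoder-decoder
    nn.Linear(300, 1000), nn.Tanh(),
    nn.Dropout(p=0.5),                      # dropout on output layers, p(drop) = 0.5
    nn.Linear(1000, 90000),
)
optimizer = torch.optim.Adadelta(
    model.parameters(),
    weight_decay=1e-5,                      # l2 regularization, mirroring the alpha above
)

def training_step(loss: torch.Tensor) -> None:
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clip large gradients to 1.0
    optimizer.step()
```

The real system pairs a bidirectional GRU encoder with a GRU decoder; the sequential stack here only exists to make the optimizer, dropout, and clipping calls concrete.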
1704.07138#47
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1704.07138
48
Training Configurations

| | EN-DE | EN-FR | EN-PT |
|---|---|---|---|
| Embedding Size | 300 | 300 | 200 |
| Recurrent Layers Size | 1000 | 1000 | 800 |
| Source Vocab Size | 80000 | 66000 | 60000 |
| Target Vocab Size | 90000 | 74000 | 74000 |
| Batch Size | 50 | 40 | 40 |

# A.1 English-German

Our English-German training corpus consists of 4.4 million segments from the Europarl (Bojar et al., 2015) and CommonCrawl (Smith et al., 2013) corpora.

# A.2 English-French

Our English-French training corpus consists of 4.9 million segments from the Europarl and CommonCrawl corpora.

# A.3 English-Portuguese

Our English-Portuguese training corpus consists of 28.5 million segments from the Europarl, JRC-Acquis (Steinberger et al., 2006) and OpenSubtitles (http://www.opensubtitles.org/) corpora.
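The same per-language-pair settings, collected into a plain dictionary for quick reference; the key names are illustrative and do not come from the paper's code.

```python
# Hypothetical config dictionary mirroring the table above.
TRAINING_CONFIGS = {
    "EN-DE": {"embedding_size": 300, "recurrent_size": 1000,
              "src_vocab": 80000, "trg_vocab": 90000, "batch_size": 50},
    "EN-FR": {"embedding_size": 300, "recurrent_size": 1000,
              "src_vocab": 66000, "trg_vocab": 74000, "batch_size": 40},
    "EN-PT": {"embedding_size": 200, "recurrent_size": 800,
              "src_vocab": 60000, "trg_vocab": 74000, "batch_size": 40},
}
```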
1704.07138#48
Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $ \mathbf{\hat{y}} = \{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) = \prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
http://arxiv.org/pdf/1704.07138
Chris Hokamp, Qun Liu
cs.CL
Accepted as a long paper at ACL 2017
null
cs.CL
20170424
20170502
[ { "id": "1611.01874" } ]
1705.00557
0
# Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning

Yacine Jernite, Department of Computer Science, New York University, [email protected]
Samuel R. Bowman, Department of Linguistics and Center for Data Science, New York University, [email protected]
David Sontag, Department of EECS, Massachusetts Institute of Technology, [email protected]

# Abstract

This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.

Figure 1: We train a sentence encoder (shown as two copies with shared parameters) on three discourse-based objectives over unlabeled text. (Figure panels: Task 1: ORDER classifier; Task 2: CONJUNCTION classifier; Task 3: NEXT classifier; inputs: Sentence(s) 1 and Sentence(s) 2.)

# Introduction
1705.00557#0
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
1
Figure 1: We train a sentence encoder (shown as two copies with shared parameters) on three discourse-based objectives over unlabeled text.

# Introduction

Modern artificial neural network approaches to natural language understanding tasks like translation (Sutskever et al., 2014; Cho et al., 2014), summarization (Rush et al., 2015), and classification (Yang et al., 2016) depend crucially on subsystems called sentence encoders that construct distributed representations for sentences. These encoders are typically implemented as convolutional (Kim, 2014), recursive (Socher et al., 2013), or recurrent neural networks (Mikolov et al., 2010) operating over a sentence's words or characters (Zhang et al., 2015; Kim et al., 2016). When reading text, human readers have an expectation of coherence from one sentence to the next. In most cases, for example, each sentence in a text should be both interpretable in context and relevant to the topic under discussion. Both of these properties depend on an understanding of the local context, which includes both knowledge about the state of the world and the specific meanings of previous sentences in the text. Thus, a model that is successfully trained to recognize discourse coherence must be able to understand the meanings of sentences as well as relate them to key pieces of knowledge about the world.
1705.00557#1
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
2
Most of the early successes with sentence encoder-based models have been on tasks with ample training data, where it has been possible to train the encoders in a fully supervised end-to-end setting. However, recent work has shown some success in using unsupervised pretraining with unlabeled data to both improve the performance of these methods and extend them to lower-resource settings (Dai and Le, 2015; Kiros et al., 2015; Bajgar et al., 2016). This paper presents a set of methods for unsupervised pretraining that train sentence encoders to recognize discourse coherence. When reading text, human readers have an expectation of coherence from one sentence to the next. Hobbs (1979) presents a formal treatment of this phenomenon. He argues that for a discourse (here, a text) to be interpreted as coherent, any two adjacent sentences must be related by one of a small set of coherence relations. For example, a sentence might be followed by another that elaborates on it, parallels it, or contrasts with it. While this treatment may not be adequate to cover the full complexity of language understanding, it allows Hobbs to show how identifying such relations depends upon sentence understanding, coreference resolution, and commonsense reasoning.
1705.00557#2
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
3
Recently proposed techniques (Kiros et al., 2015; Ramachandran et al., 2016) succeed in exploiting discourse coherence information of this kind to train sentence encoders, but rely on generative objectives which require models to compute the likelihood of each word in a sentence at training time. In this setting, a single epoch of training on a typical (76M sentence) text corpus can take weeks, making further research difficult, and making it nearly impossible to scale these methods to the full volume of available unlabeled English text. In this work, we propose alternative objectives which exploit much of the same coherence information at greatly reduced cost. In particular, we propose three fast coherence-based pretraining tasks, show that they can be used together effectively in multitask training (Figure 1), and evaluate models trained in this setting on the training tasks themselves and on standard text classification tasks.[1] We find that our approach makes it possible to learn to extract broadly useful sentence representations in hours.

# 2 Related Work
1705.00557#3
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
4
# 2 Related Work

This work is inspired most directly by the Skip Thought approach of Kiros et al. (2015), which introduces the use of paragraph-level discourse information for the unsupervised pretraining of sentence encoders. Since that work, three other papers have presented improvements to this method (the SDAE of Hill et al. 2016, also Gan et al. 2016; Ramachandran et al. 2016). These improved methods are based on techniques and goals that are similar to ours, but all three involve models that explicitly generate full sentences during training time at considerable computational cost. In closely related work, Logeswaran et al. (2016) present a model that learns to order the sentences of a paragraph. While they focus on learning to assess coherence, they show positive results on measuring sentence similarity using their trained encoder. Alternately, the FastSent model of Hill et al. (2016) is designed to work dramatically more quickly than systems like Skip Thought, but in service of this goal the standard sentence encoder RNN is replaced with a low-capacity CBOW model. Their method does well on existing semantic textual similarity benchmarks, but its insensitivity to order places an upper bound on its performance in more intensive extrinsic language understanding tasks.
1705.00557#4
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
5
[1] All code, resources, and models involved in these experiments will be made available upon publication.

| Sentence Pair | Label | Relation |
|---|---|---|
| A strong one at that. / Then I became a woman. | Y | elaboration |
| I saw flowers on the ground. / I heard birds in the trees. | N | list |
| It limped closer at a slow pace. / Soon it stopped in front of us. | N | spatial |
| I kill Ben, you leave by yourself. / I kill your uncle, you join Ben. | Y | time |

Table 1: The binary ORDER objective. Discourse relation labels are provided for the reader, but are not available to the model.

Looking beyond work on unsupervised pretraining: Li and Hovy (2014) and Li and Jurafsky (2016) use representation learning systems to directly model the problem of sentence order recovery, but focus primarily on intrinsic evaluation rather than transfer. Wang and Cho (2016) train sentence representations for use as context in language modeling. In addition, Ji et al. (2016) treat discourse relations between sentences as latent variables and show that this yields improvements in language modeling in an extension of the document-context model of Ji et al. (2015).
1705.00557#5
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
6
Outside the context of representation learning, there has been a good deal of work in NLP on discourse coherence, and on the particular tasks of sentence ordering and coherence scoring. Barzilay and Lapata (2008) provide thorough coverage of this work.

# 3 Discourse Inspired Objectives

In this work, we propose three objective functions for use over paragraphs extracted from unlabeled text. Each captures a different aspect of discourse coherence and together the three can be used to train a single encoder to extract broadly useful sentence representations.

Binary Ordering of Sentences. Many coherence relations have an inherent direction. For example, if S1 is an elaboration of S0, S0 is not generally an elaboration of S1. Thus, being able to identify these coherence relations implies an ability to recover the original order of the sentences. Our first task, which we call ORDER, consists in taking pairs of adjacent sentences from text data, switching their order with probability 0.5, and training a model to decide whether they have been switched. Table 1 provides some examples of this task, along with the kind of coherence relation that we assume to be involved.

# Context
No, not really. I had some ideas, some plans. But I never even caught sight of them.

# Candidate Successors
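A small sketch of how ORDER training examples could be constructed from raw paragraphs, following the description above (adjacent sentence pairs, swapped with probability 0.5, labeled by whether they were swapped). The function name and tokenization choices are assumptions for illustration, not the authors' preprocessing code; NLTK is used for sentence splitting, as in the paper's setup.

```python
# Hedged sketch of ORDER example construction.
# Requires NLTK's punkt sentence tokenizer data to be installed.
import random
from typing import List, Tuple

import nltk


def order_examples(paragraph: str, rng: random.Random) -> List[Tuple[str, str, int]]:
    """Yield (sentence_1, sentence_2, label) triples; label 1 means 'switched'."""
    sentences = nltk.sent_tokenize(paragraph.lower())
    examples = []
    for s1, s2 in zip(sentences, sentences[1:]):      # adjacent sentence pairs
        if rng.random() < 0.5:                        # switch order with probability 0.5
            examples.append((s2, s1, 1))
        else:
            examples.append((s1, s2, 0))
    return examples
```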
1705.00557#6
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
7
# Context
No, not really. I had some ideas, some plans. But I never even caught sight of them.

# Candidate Successors
1. There's nothing I can do that compares that.
2. Then one day Mister Edwards saw me.
3. I drank and that was about all I did.
4. And anyway, God's getting his revenge now.
5. He offered me a job and somewhere to sleep.

Table 2: The NEXT objective.

| Sentence Pair | Label |
|---|---|
| He had a point. / For good measure, I pouted. | RETURN (Still) |
| It doesn't hurt at all. / It's exhilarating. | STRENGTHEN (In fact) |
| The waterwheel hammered on. / There was silence. | CONTRAST (Otherwise) |

Table 3: The CONJUNCTION objective. Discourse relation labels are shown with the text from which they were derived.

Table 1 provides some examples of this task, along with the kind of coherence relation that we assume to be involved. It should be noted that since some of these relations are unordered, it is not always possible to recover the original order based on discourse coherence alone (see e.g. the flowers / birds example).
1705.00557#7
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
8
Next Sentence. Many coherence relations are transitive by nature, so that any two sentences from the same paragraph will exhibit some coherence. However, two adjacent sentences will generally be more coherent than two more distant ones. This leads us to formulate the NEXT task: given the first three sentences of a paragraph and a set of five candidate sentences from later in the paragraph, the model must decide which candidate immediately follows the initial three in the source text. Table 2 presents an example of such a task: candidates 2 and 3 are coherent with the third sentence of the paragraph, but the elaboration (3) takes precedence over the progression (2).

Conjunction Prediction. Finally, information about the coherence relation between two sentences is sometimes apparent in the text (Miltsakaki et al., 2004): this is the case whenever the second sentence starts with a conjunction phrase. To form the CONJUNCTION objective, we create a list of conjunction phrases and group them into nine categories (see supplementary material). We then extract from our source text all pairs of sentences where the second starts with one of the listed conjunctions, give the system the pair without the phrase, and train it to recover the conjunction category. Table 3 provides examples.

# 4 Experiments
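A hedged sketch of how CONJUNCTION examples could be extracted, following the description above: find sentence pairs where the second sentence opens with a listed conjunction phrase, strip the phrase, and use its category as the label. The category map here is abridged (the full nine-way grouping is in the supplementary Table 7) and the matching logic is an assumption, not the paper's extraction code.

```python
# Illustrative CONJUNCTION example extraction; CONJUNCTION_GROUPS is abridged.
from typing import Optional, Tuple

CONJUNCTION_GROUPS = {
    "in fact": "strengthen", "indeed": "strengthen",
    "however": "contrast", "otherwise": "contrast",
    "still": "return",
    "then": "time", "meanwhile": "time",
}


def conjunction_example(s1: str, s2: str) -> Optional[Tuple[str, str, str]]:
    """Return (s1, s2_without_phrase, category) if s2 opens with a known phrase."""
    lowered = s2.lower()
    # Try longer phrases first so "in fact" is not shadowed by shorter matches.
    for phrase in sorted(CONJUNCTION_GROUPS, key=len, reverse=True):
        if lowered.startswith(phrase + " ") or lowered.startswith(phrase + ","):
            stripped = s2[len(phrase):].lstrip(" ,")
            return s1, stripped, CONJUNCTION_GROUPS[phrase]
    return None
```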
1705.00557#8
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
9
# 4 Experiments

In this section, we introduce our training data and methods, present qualitative results and comparisons among our three objectives, and close with quantitative comparisons with related work.

Experimental Setup. We train our models on a combination of data from BookCorpus (Zhu et al., 2015), the Gutenberg project (Stroube, 2003), and Wikipedia. After sentence and word tokenization (with NLTK; Bird, 2006) and lower-casing, we identify all paragraphs longer than 8 sentences and extract a NEXT example from each, as well as pairs of sentences for the ORDER and CONJUNCTION tasks. This gives us 40M examples for ORDER, 1.4M for CONJUNCTION, and 4.1M for NEXT. Despite having recently become a standard dataset for unsupervised learning, BookCorpus does not exhibit sufficiently rich discourse structure to allow our model to fully succeed; in particular, some of the conjunction categories are severely under-represented. Because of this, we choose to train our models on text from all three sources. While this precludes a strict apples-to-apples comparison with other published results, our goal in extrinsic evaluation is simply to show that our method makes it possible to learn useful representations quickly, rather than to demonstrate the superiority of our learning technique given fixed data and unlimited time.
1705.00557#9
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
10
We consider three sentence encoding models: a simple 1024D sum-of-words (CBOW) encoding, a 1024D GRU recurrent neural network (Cho et al., 2014), and a 512D bidirectional GRU RNN (BiGRU). All three use FastText (Joulin et al., 2016) pre-trained word embeddings [2] to which we apply a Highway transformation (Srivastava et al., 2015). The encoders are trained jointly with three bilinear classifiers for the three objectives (for the NEXT examples, the three context sentences are encoded separately and their representations are concatenated).

[2] https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md

| | CONJUNCTION | ORDER | NEXT |
|---|---|---|---|
| CBOW joint | 42.8 | 56.6 | 27.7 |
| GRU joint | 39.5 | 54.3 | 25.9 |
| BiGRU joint | 45.1 | 58.3 | 30.2 |
| BiGRU single | 45.5 | 57.1 | 29.3 |

Table 4: Intrinsic evaluation results.

Grant laughed and complied with the suggestion.
Pauline stood for a moment in complete bewilderment.
Her eyes narrowed on him, considering.
Helena felt her face turn red hot.
Her face remained expressionless as dough.

Table 5: The nearest neighbors for a sentence.
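For illustration, a hedged PyTorch-style sketch of the encoder family described above: pre-trained word embeddings passed through a highway layer, a bidirectional GRU whose final states form the sentence representation, and a bilinear classifier over a pair of encoded sentences. The dimensions and module structure are assumptions; the authors' implementation is not reproduced here.

```python
# Hedged sketch of a BiGRU sentence encoder with a highway input transform
# and a bilinear pair classifier, loosely following the description above.
import torch
from torch import nn


class HighwayBiGRUEncoder(nn.Module):
    def __init__(self, emb_dim: int = 300, hidden: int = 512):
        super().__init__()
        self.gate = nn.Linear(emb_dim, emb_dim)       # highway gate
        self.transform = nn.Linear(emb_dim, emb_dim)  # highway transform
        self.gru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, embedded: torch.Tensor) -> torch.Tensor:
        # embedded: (batch, seq_len, emb_dim), e.g. pre-trained FastText vectors
        g = torch.sigmoid(self.gate(embedded))
        h = torch.relu(self.transform(embedded))
        x = g * h + (1.0 - g) * embedded              # highway combination
        _, final = self.gru(x)                        # final: (2, batch, hidden)
        return torch.cat([final[0], final[1]], dim=-1)  # (batch, 2 * hidden)


encoder = HighwayBiGRUEncoder()
order_classifier = nn.Bilinear(1024, 1024, 2)         # e.g. binary ORDER decision

s1 = encoder(torch.randn(4, 20, 300))                 # toy batch of embedded sentences
s2 = encoder(torch.randn(4, 20, 300))
logits = order_classifier(s1, s2)                     # shape (4, 2)
```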
1705.00557#10
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
11
Table 5: The nearest neighbors for a sentence.

concatenated). We perform stochastic gradient descent with AdaGrad (Duchi et al., 2011), subsampling CONJUNCTION and NEXT by a factor of 4 and 6 respectively (chosen using held-out accuracy averaged over all three tasks after training on 1M examples). In this setting, the BiGRU model takes 8 hours to see all of the examples from the BookCorpus dataset at least once. For ease of comparison, we train all three models for exactly 8 hours.

Intrinsic and Qualitative Evaluation. Table 4 compares the performance of different training regimes along two axes: encoder architecture and whether we train one model per task or one joint model. As expected, the more complex bidirectional GRU architecture is required to capture the appropriate sentence properties, although CBOW still manages to beat the simple GRU (the slowest model), likely by virtue of its substantially faster speed, and correspondingly greater number of training epochs. Joint training does appear to be effective, as both the ORDER and NEXT tasks benefit from the information provided by CONJUNCTION. Early experiments on the external evaluation also show that the joint BiGRU model substantially outperforms each single model.
1705.00557#11
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
12
Table 5 and the supplementary material show nearest neighbors in the trained BiGRU's representation space for a random set of seed sentences. We select neighbors from among 400k held-out sentences. The encoder appears to be especially sensitive to high-level syntactic structure.

Extrinsic Evaluation. We evaluate the quality of the encoder learned by our system, which we call DiscSent, by using the sentence representations it produces in a variety of sentence classification tasks.

| Model | Time | MSRP | TREC | SUBJ |
|---|---|---|---|---|
| FastSent [1] | ≈13h | 72.2 | 76.8 | 88.7 |
| FastSent+AE [1] | ≈13h | 71.2 | 80.4 | 88.8 |
| SDAE [1] | 192h | 76.4 | 77.6 | 89.3 |
| SDAE+embed [1] | 192h | 73.7 | 78.4 | 90.8 |
| SkipT biGRU [2] | 336h | 71.2 | 89.4 | 92.5 |
| SkipT GRU [2] | 336h | 73.0 | 91.4 | 92.1 |
| SkipT+feats [2] | 336h | 75.8 | 92.2 | 93.6 |
| Ordering model [3] | 48h | 72.3 | – | – |
| Ordering+embed [3] | 48h | 74.0 | – | – |
| +embed+SkipT [3] | 48h | 74.9 | – | – |
| DiscSent biGRU | 8h | 71.6 | 81.0 | 88.6 |
| DiscSent+unigram | 8h | 72.5 | 87.9 | 92.7 |
| DiscSent+embed | 8h | 75.0 | 87.2 | 93.0 |
1705.00557#12
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
13
Table 6: Text classification results, including training time. +embed lines combine the sentence encoder output with the sum of the pretrained word embeddings for the sentence. +unigram lines do so using embeddings learned for each target task without pretraining. +feats varies by task. References: [1] Hill et al. (2016), [2] Kiros et al. (2015), [3] Logeswaran et al. (2016).

We follow the settings of Kiros et al. (2015) on paraphrase detection (MSRP; Dolan et al., 2004), subjectivity evaluation (SUBJ; Pang and Lee, 2004) and question classification (TREC; Voorhees, 2001). Overall, our system performs comparably with the SDAE and Skip Thought approaches with a drastically shorter training time. Our system also compares favorably to the similar discourse-inspired method of Logeswaran et al. (2016), achieving similar results on MSRP in a sixth of their training time.

# 5 Conclusion
1705.00557#13
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
14
# 5 Conclusion

In this work, we introduce three new training objectives for unsupervised sentence representation learning inspired by the notion of discourse coherence, and use them to train a sentence representation system in competitive time, from 6 to over 40 times shorter than comparable methods, while obtaining comparable results on external evaluation tasks. We hope that the tasks that we introduce in this paper will prompt further research into discourse understanding with neural networks, as well as into strategies for unsupervised learning that will make it possible to use unlabeled data to train and refine a broader range of models for language understanding tasks.

# References

Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. 2016. Embracing data abundance: BookTest dataset for reading comprehension. CoRR abs/1610.00956.

Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics 34.

Steven Bird. 2006. NLTK: the natural language toolkit. In ACL 2006, Sydney, Australia.
1705.00557#14
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
15
Steven Bird. 2006. NLTK: the natural language toolkit. In ACL 2006, Sydney, Australia.

Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In EMNLP 2014, Doha, Qatar.

Andrew M. Dai and Quoc V. Le. 2015. Semi-supervised sequence learning. In NIPS 2015, Montreal, Quebec, Canada.

Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In COLING 2004, Geneva, Switzerland.

John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12.

Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, and Lawrence Carin. 2016. Unsupervised learning of sentence representations using convolutional neural networks. CoRR abs/1611.07897.

Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In NAACL 2016, San Diego, California, USA.
1705.00557#15
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
16
Jerry R. Hobbs. 1979. Coherence and coreference. Cognitive Science 3.

Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2015. Document context language models. CoRR abs/1511.03962.

Yangfeng Ji, Gholamreza Haffari, and Jacob Eisenstein. 2016. A latent variable recurrent neural network for discourse relation language models. CoRR abs/1603.01913.

Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. CoRR abs/1607.01759.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP 2014, Doha, Qatar.

Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In AAAI 2016, Phoenix, Arizona, USA.
1705.00557#16
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
17
Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In NIPS 2015, Montreal, Quebec, Canada.

Jiwei Li and Eduard H. Hovy. 2014. A model of coherence based on distributed sentence representation. In EMNLP 2014, Doha, Qatar.

Jiwei Li and Dan Jurafsky. 2016. Neural net models for open-domain discourse coherence. CoRR abs/1606.01545.

Lajanugen Logeswaran, Honglak Lee, and Dragomir R. Radev. 2016. Sentence ordering using recurrent neural networks. CoRR abs/1611.02654.

Tomas Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH 2010, Makuhari, Chiba, Japan.

Eleni Miltsakaki, Rashmi Prasad, Aravind K. Joshi, and Bonnie L. Webber. 2004. The Penn Discourse Treebank. In LREC 2004, Lisbon, Portugal.
1705.00557#17
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
18
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In ACL 2004, Barcelona, Spain, pages 271–278.

Prajit Ramachandran, Peter J. Liu, and Quoc V. Le. 2016. Unsupervised pretraining for sequence to sequence learning. CoRR abs/1611.02683.

Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In EMNLP 2015, Lisbon, Portugal.

Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, Christopher Potts, et al. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP 2013, Seattle, Washington, USA.

Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. CoRR abs/1505.00387.

Bryan Stroube. 2003. Literary freedom: Project Gutenberg. ACM Crossroads 10.
1705.00557#18
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
19
Bryan Stroube. 2003. Literary freedom: Project Gutenberg. ACM Crossroads 10.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS 2014, Montreal, Quebec, Canada.

Ellen M. Voorhees. 2001. Overview of the TREC 2001 question answering track. In TREC 2001, Gaithersburg, Maryland, USA.

Tian Wang and Kyunghyun Cho. 2016. Larger-context language modelling with recurrent neural network. In ACL 2016, Berlin, Germany.

Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In NAACL 2016, San Diego, California, USA.

Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS 2015, Montreal, Quebec, Canada.
1705.00557#19
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
20
Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV 2015, Santiago, Chile.

# Supplement to: Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning

Table 7 lists the conjunction phrases and groupings used. Table 8 (next page) shows the Euclidean nearest neighbors of a sample of sentences in our representation space.

| Group | Conjunction phrases |
|---|---|
| addition | again, also, besides, finally, further, furthermore, moreover, in addition |
| contrast | anyway, however, instead, nevertheless, otherwise, contrarily, conversely, nonetheless, in contrast, rather |
| time | meanwhile, next, then, now, thereafter |
| result | accordingly, consequently, hence, thus, therefore |
| specific | namely, specifically, notably, that is, for example |
| compare | likewise, similarly |
| strengthen | indeed, in fact |
| return | still |
| recognize | undoubtedly, certainly |

Table 7: Grouping of conjunctions.

His main influences are Al Di, Jimi Hendrix, Tony, JJ Cale, Malmsteen and Paul Gilbert.
1705.00557#20
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
21
Table 7: Grouping of conjunctions. His main influences are Al Di, Jimi Hendrix, Tony, JJ Cale, Malmsteen and Paul Gilbert. The album features guest appearances from Kendrick Lamar, Schoolboy Q, 2 Chainz, Drake, Big. The production had original live rock, blues, jazz, punk, and music composed and arranged by Steve and Diane Gioia. There are 6 real drivers in the game: Gilles, Richard Burns, Carlos Sainz, Philippe, Piero, and Tommi. Other rappers that did include Young Jeezy, Lil Wayne, Freddie Gibbs, Emilio Rojas, German rapper and Romeo Miller. Grant laughed and complied with the suggestion. Pauline stood for a moment in complete bewilderment. Her eyes narrowed on him, considering. Helena felt her face turn red hot. Her face remained expressionless as dough. Items can be selected specifically to represent characteristics that are not as well represented in natural language.
1705.00557#21
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
22
Items can be selected specifically to represent characteristics that are not as well represented in natural language. Cache manifests can also use relative paths or even absolute urls as shown below. Locales can be used to translate into different languages, or variations of text, which are replaced by reference. Nouns can only be inflected for the possessive, in which case a prefix is added. Ratios are commonly used to compare banks, because most assets and liabilities of banks are constantly valued at market values. A group of generals thus created a secret organization, the united officers’ group, in order to oust Castillo from power. The home in Massachusetts is controlled by a private society organized for the purpose, with a board of fifteen trustees in charge. A group of ten trusted servants men from the family were assigned to search the eastern area of the island in the area. The city is divided into 144 administrative wards that are grouped into 15 boroughs. each of these wards elects a councillor. From 1993 to 1994 she served as US ambassador to the United Nations commission on the status of women. As a result of this performance, Morelli’s play had become a polarizing issue amongst Nittany Lion fans.
1705.00557#22
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
23
As a result of this performance, Morelli’s play had become a polarizing issue amongst Nittany Lion fans. In the end, Molly was deemed to have more potential, eliminating Jaclyn despite having a stellar portfolio. As a result of the Elway connection, Erickson spent time that year learning about the offense with Jack. As a result of the severe response of the czarist authorities to this insurrection, had to leave Poland. Another unwelcome note is struck by the needlessly aggressive board on the museum which has already been mentioned. # Zayd Ibn reported , “we used to record the Quran from parchments in the presence of the messenger of god.” Daniel Pipes says that “primarily through “the protocols of the Elders of Zion”, the whites spread these charges to [. . . ]” Sam wrote in “” (1971) that Howard’s fiction was “a kind of wild West in the lands of unbridled fantasy.” said , the chancellor “elaborately fought for an European solution” in the refugee crisis, but this was “out of sight”. Robert , writing for “The New York Post”, states that, “in Mellie , the show has its most character [. . . ]”
1705.00557#23
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
24
Many “Crimean Goths” were Greek speakers and many Byzantine citizens were settled in the region called [. . . ] The personal name of “Andes”, popular among the Illyrians of southern Pannonia and much of Northern Dalmatia [. . . ] is identified by the Chicano as the first settlement of the people in North America before their Southern migration [. . . ] The range of “H.” stretches across the Northern and Western North America as well as across Europe [. . . ] The name “Dauphin river” actually refers to two closely tied communities; bay and some members of Dauphin river first nation. She smiled and he smiled in return. He shook his head and smiled broadly. He laughed and shook his head. He gazed at her in amazement. She sighed and shook her head at her foolishness. The jury returned a verdict of not in the Floyd cox case, in which he was released immediately.
1705.00557#24
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1705.00557
25
The jury returned a verdict of not in the Floyd cox case, in which he was released immediately. The match lasted only 1 minute and 5 seconds, and was the second quickest bout of the division. His results qualified him for the Grand Prix final, in which he placed 6th overall. The judge stated that the prosecution had until march 1, 2012, to file charges. In November, he reached the final of the Ruhr Open, but lost 4˘20130 against Murphy. # Here was at least a slight reprieve. The monsters seemed to be frozen in time. This had an impact on him. That was all the sign he needed. So this was disturbing as hell. Table 8: Nearest neighbor examples
1705.00557#25
Discourse-Based Objectives for Fast Unsupervised Sentence Representation Learning
This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations.
http://arxiv.org/pdf/1705.00557
Yacine Jernite, Samuel R. Bowman, David Sontag
cs.CL, cs.LG, cs.NE, stat.ML
null
null
cs.CL
20170423
20170423
[]
1704.06369
0
# NormFace: L2 Hypersphere Embedding for Face Verification Feng Wang∗ University of Electronic Science and Technology of China 2006 Xiyuan Ave. Chengdu, Sichuan 611731 [email protected] Xiang Xiang Johns Hopkins University 3400 N. Charles St. Baltimore, Maryland 21218 [email protected] Jian Cheng University of Electronic Science and Technology of China 2006 Xiyuan Ave. Chengdu, Sichuan 611731 [email protected] Alan L. Yuille Johns Hopkins University 3400 N. Charles St. Baltimore, Maryland 21218 [email protected]
1704.06369#0
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
0
# Equivalence Between Policy Gradients and Soft Q-Learning John Schulman1, Xi Chen1,2, and Pieter Abbeel1,2 # 1OpenAI 2UC Berkeley, EECS Dept. # {joschu, peter, pieter}@openai.com # Abstract Two of the leading approaches for model-free reinforcement learning are policy gradient methods and Q-learning methods. Q-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the Q-values they estimate are very inaccurate. A partial explanation may be that Q-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between Q-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that “soft” (entropy-regularized) Q-learning is exactly equivalent to a policy gradient method. We also point out a connection between Q-learning methods and natural policy gradient methods.
1704.06440#0
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
1
Alan L. Yuille Johns Hopkins University 3400 N. Charles St. Baltimore, Maryland 21218 [email protected] ABSTRACT Thanks to the recent developments of Convolutional Neural Net- works, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differen- tiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve per- formance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. [19] and so on. In the field of face verification, CNNs have already surpassed humans’ abilities on several benchmarks[20, 33].
1704.06369#1
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
1
Experimentally, we explore the entropy-regularized versions of Q-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a Q-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule. # 1 Introduction
1704.06440#1
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
2
[19] and so on. In the field of face verification, CNNs have already surpassed humans’ abilities on several benchmarks[20, 33]. The most common pipeline for a face verification application involves face detection, facial landmark detection, face alignment, feature extraction, and finally feature comparison. In the feature comparison step, the cosine similarity or equivalently L2-normalized Euclidean distance is used to measure the similarities between features. The cosine similarity $\frac{\langle \mathbf{x}_1, \mathbf{x}_2 \rangle}{\|\mathbf{x}_1\|_2 \|\mathbf{x}_2\|_2}$ is a similarity measure which is independent of magnitude. It can be seen as the normalized version of the inner-product of two vectors. But in practice the inner-product without normalization is the most widely-used similarity measure when training CNN classification models [12, 29, 32]. In other words, the similarity or distance metric used during training is different from that used in the testing phase. To our knowledge, no researcher in the face verification community has clearly explained why the features should be normalized to calculate the similarity in the testing phase. Feature normalization is treated only as a trick to promote the performance during testing. CCS CONCEPTS • Computing methodologies → Object identification; Supervised learning by classification; Neural networks; Regularization;
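A minimal sketch of the two similarity measures contrasted above, using made-up feature vectors rather than features from the model in [36]; it only illustrates that the inner-product depends on feature magnitude while the cosine similarity does not:

```python
import numpy as np

def inner_product(f1, f2):
    # Unnormalized similarity: grows with feature magnitude.
    return float(np.dot(f1, f2))

def cosine_similarity(f1, f2):
    # Inner product of L2-normalized features; note that
    # ||f1_hat - f2_hat||^2 = 2 - 2 * cos(f1, f2), so ranking by cosine
    # similarity and by L2-normalized Euclidean distance is equivalent.
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))

# Two hypothetical face features pointing in the same direction
# but with different magnitudes.
f1 = np.array([1.0, 2.0, 2.0])
f2 = 3.0 * f1

print(inner_product(f1, f2))      # 27.0 -- depends on magnitude
print(cosine_similarity(f1, f2))  # 1.0  -- depends only on direction
```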
1704.06369#2
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
2
Policy gradient methods (PG) and Q-learning (QL) methods perform updates that are qualitatively similar. In both cases, if the return following an action $a_t$ is high, then that action is reinforced: in policy gradient methods, the probability $\pi(a_t \mid s_t)$ is increased; whereas in Q-learning methods, the Q-value $Q(s_t, a_t)$ is increased. The connection becomes closer when we add entropy regularization to these algorithms. With an entropy cost added to the returns, the optimal policy has the form $\pi(a \mid s) \propto \exp(Q(s, a))$; hence policy gradient methods solve for the optimal Q-function, up to an additive constant (Ziebart [2010]). O’Donoghue et al. [2016] also discuss the connection between the fixed points and updates of PG and QL methods, though the discussion of fixed points is restricted to the tabular setting, and the discussion comparing updates is informal and shows an approximate equivalence. Going beyond past work, this paper shows that under appropriate conditions, the gradient of the loss function used in n-step Q-learning is equal to the gradient of the loss used in an n-step policy gradient method, including a squared-error term
1704.06440#2
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
3
CCS CONCEPTS • Computing methodologies → Object identification; Supervised learning by classification; Neural networks; Regularization;

To illustrate this, we performed an experiment which compared the face features without normalization, i.e. using the unnormalized inner-product or Euclidean distance as the similarity measurement. The features were extracted from an online available model [36]1. We followed the standard protocol of unrestricted with labeled outside data[9] and test the model on the Labeled Faces in the Wild (LFW) dataset[10]. The results are listed in Table 1.

# KEYWORDS
Face Verification, Metric Learning, Feature Normalization

# Table 1: Effect of Feature Normalization

| Similarity | Inner-Product | Euclidean |
| --- | --- | --- |
| Before Normalization | 98.27% | 98.35% |
| After Normalization | 98.98% | 98.95% |

# 1 INTRODUCTION
In recent years, Convolutional neural networks (CNNs) achieve state-of-the-art performance for various computer vision tasks, such as object recognition [12, 29, 32], detection [5], segmentation

∗Alan L. Yuille’s visiting student.
1704.06369#3
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
3
the gradient of the loss function used in n-step Q-learning is equal to the gradient of the loss used in an n-step policy gradient method, including a squared-error term on the value function. Altogether, the update matches what is typically done in “actor-critic” policy gradient methods such as A3C, which explains why Mnih et al. [2016] obtained qualitatively similar results from policy gradients and n-step Q-learning.
1704.06440#3
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
4
∗Alan L. Yuille’s visiting student. MM ’17, October 23–27, 2017, Mountain View, CA, USA. © 2017 ACM. ISBN 978-1-4503-4906-2/17/10. DOI: https://doi.org/10.1145/3123266.3123359 As shown in the table, feature normalization promoted the performance by about 0.6% ∼ 0.7%, which is a significant improvement since the accuracies are already above 98%. Feature normalization seems to be a crucial step to get good performance during testing. Noting that the normalization operation is differentiable, there is no reason that stops us from importing this operation into the CNN model to perform end-to-end training.
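To make the "normalization is differentiable" point concrete, here is a sketch (not the authors' implementation) of an L2-normalization layer's forward pass together with its analytic backward pass, checked against finite differences:

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    return x / (np.linalg.norm(x) + eps)

def l2_normalize_backward(x, grad_out, eps=1e-12):
    # Analytic gradient of y = x / ||x||_2, applied to the upstream gradient.
    norm = np.linalg.norm(x) + eps
    y = x / norm
    return (grad_out - y * np.dot(y, grad_out)) / norm

# Finite-difference check of the backward pass on a random vector.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
g = rng.normal(size=5)                       # pretend upstream gradient dL/dy
analytic = l2_normalize_backward(x, g)

numeric = np.zeros_like(x)
h = 1e-6
for j in range(len(x)):
    xp, xm = x.copy(), x.copy()
    xp[j] += h
    xm[j] -= h
    numeric[j] = (np.dot(g, l2_normalize(xp)) - np.dot(g, l2_normalize(xm))) / (2 * h)

print(np.max(np.abs(analytic - numeric)))    # tiny, so the layer is indeed differentiable
```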
1704.06369#4
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
4
Section 2 uses the bandit setting to provide the reader with a simplified version of our main calculation. (The main calculation applies to the MDP setting.) Section 3 discusses the entropy-regularized formulation of RL, which is not original to this work, but is included for the reader’s convenience. Section 4 shows that the soft Q-learning loss gradient can be interpreted as a policy gradient term plus a baseline-error-gradient term, corresponding to policy gradient instantiations such as A3C [Mnih et al., 2016]. Section 5 draws a connection between QL methods that use batch updates or replay-buffers, and natural policy gradient methods. Some previous work on entropy regularized reinforcement learning (e.g., O’Donoghue et al. [2016], Nachum et al. [2017]) uses entropy bonuses, whereas we use a penalty on Kullback-Leibler (KL) diver- gence, which is a bit more general. However, in the text, we often refer to “entropy” terms; this refers to “relative entropy”, i.e., the KL divergence. # 2 Bandit Setting
1704.06440#4
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
5
# 1https://github.com/ydwen/caffe-face

Figure 1: Pipeline of face verification model training and testing using a classification loss function. Previous works did not use the normalization after feature extraction during training. But in the testing phase, all methods used a normalized similarity, e.g. cosine, to compare two features.

Some previous works[23, 28] successfully trained CNN models with the features being normalized in an end-to-end fashion. However, both of them used the triplet loss, which needs to sample triplets of face images during training. It is difficult to train because we usually need to implement hard mining algorithms to find non-trivial triplets[28]. Another route is to train a classification network using softmax loss[31, 38] and regularizations to limit the intra-class variance[16, 36]. Furthermore, some works combine the classification and metric learning loss functions together to train CNN models[31, 41]. All these methods that used classification loss functions, e.g. softmax loss, did not apply feature normalization, even though they all used a normalized similarity measure, e.g. cosine similarity, to get the confidence of judging two samples being of the same identity at testing phase (Figure 1).
1704.06369#5
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
5
# 2 Bandit Setting

Let’s consider a bandit problem with a discrete or continuous action space: at each timestep the agent chooses an action $a$, and the reward $r$ is sampled according to $P(r \mid a)$, where $P$ is unknown to the agent. Let $\bar{r}(a) = \mathbb{E}[r \mid a]$, and let $\pi$ denote a policy, where $\pi(a)$ is the probability of action $a$. Then, the expected per-timestep reward of the policy $\pi$ is $\mathbb{E}_{a \sim \pi}[r] = \sum_a \pi(a)\bar{r}(a)$ or $\int \mathrm{d}a\, \pi(a)\bar{r}(a)$. Let’s suppose we are maximizing $\eta(\pi)$, an entropy-regularized version of this objective:

$$\eta(\pi) = \mathbb{E}_{a \sim \pi, r}[r] - \tau D_{\mathrm{KL}}[\pi \,\|\, \bar{\pi}] \quad (1)$$

where $\bar{\pi}$ is some “reference” policy, $\tau$ is a “temperature” parameter, and $D_{\mathrm{KL}}$ is the Kullback-Leibler divergence. Note that the temperature $\tau$ can be eliminated by rescaling the rewards. However, we will leave it so that our calculations are checkable through dimensional analysis, and to make the temperature-dependence more explicit. First, let us calculate the policy $\pi$ that maximizes $\eta$. We claim that $\eta(\pi)$ is maximized by $\pi^B$ defined as
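A small numerical sketch of Equation (1) for a discrete-action bandit; the rewards, temperature, and policies below are made-up values chosen only to show how the KL penalty trades off against expected reward:

```python
import numpy as np

def eta(pi, r_bar, pi_ref, tau):
    # Equation (1): eta(pi) = E_{a~pi}[rbar(a)] - tau * KL(pi || pi_ref)
    kl = np.sum(pi * (np.log(pi) - np.log(pi_ref)))
    return np.dot(pi, r_bar) - tau * kl

r_bar  = np.array([1.0, 2.0, 0.5])            # expected reward per action (made up)
pi_ref = np.array([1/3, 1/3, 1/3])            # reference policy
tau    = 0.5                                  # temperature

uniform = np.array([1/3, 1/3, 1/3])           # no KL penalty, but low reward
greedy  = np.array([0.0001, 0.9998, 0.0001])  # high reward, pays a KL penalty

print(eta(uniform, r_bar, pi_ref, tau))
print(eta(greedy,  r_bar, pi_ref, tau))
```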
1704.06440#5
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
6
We did an experiment by normalizing both the features and the weights of the last inner-product layer to build a cosine layer in an ordinary CNN model. After sufficient iterations, the network still did not converge. After observing this phenomenon, we dig deeply into this problem. In this paper, we will find out the reason and propose methods to enable us to train the normalized features. To sum up, in this work, we analyze and answer the questions mentioned above about the feature normalization and the model training:

(1) Why is feature normalization so efficient when comparing the CNN features trained by classification loss, especially for softmax loss?
(2) Why does directly optimizing the cosine similarity using softmax loss cause the network to fail to converge?
(3) How to optimize a cosine similarity when using softmax loss?
(4) Since models with softmax loss fail to converge after normalization, are there any other loss functions suitable for normalized features?

For the first question, we explain it through a property of softmax loss in Section 3.1. For the second and third questions, we provide a bound to describe the difficulty of using softmax loss to optimize a cosine similarity and propose using the scaled cosine similarity in Section 3.3. For the fourth question, we reformulate a set of loss functions in metric learning, such as contrastive loss and triplet loss to perform the classification task by introducing an ‘agent’
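As an illustration of the "scaled cosine similarity" answer to question (3), the sketch below re-implements a softmax cross-entropy on scaled cosine logits in plain NumPy; it is not the authors' released code, and the layer sizes and scale value are arbitrary:

```python
import numpy as np

def scaled_cosine_softmax_loss(features, weights, labels, s=20.0, eps=1e-12):
    # Softmax cross-entropy computed on s * cosine(feature, class weight).
    # features: (N, D) raw CNN features;  weights: (C, D) last-layer class weights
    # labels:   (N,) integer class ids;   s: scale applied to the cosine logits
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    w = weights  / (np.linalg.norm(weights,  axis=1, keepdims=True) + eps)
    logits = s * (f @ w.T)                       # cosine similarities in [-1, 1], scaled by s
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy example: 4 samples, 8-dim features, 3 classes.
rng = np.random.default_rng(0)
loss = scaled_cosine_softmax_loss(rng.normal(size=(4, 8)),
                                  rng.normal(size=(3, 8)),
                                  np.array([0, 1, 2, 0]))
print(loss)
```

Without the scale $s$ the logits would be confined to $[-1, 1]$, which is the convergence difficulty discussed in the paper.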
1704.06369#6
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
6
First, let us calculate the policy $\pi$ that maximizes $\eta$. We claim that $\eta(\pi)$ is maximized by $\pi^B$ defined as

$$\pi^B(a) = \bar{\pi}(a) \exp(\bar{r}(a)/\tau) \,/\, \mathbb{E}_{a' \sim \bar{\pi}}[\exp(\bar{r}(a')/\tau)] \quad (2)$$

(the denominator is a normalizing constant). To derive this, consider the KL divergence between $\pi$ and $\pi^B$:

$$D_{\mathrm{KL}}[\pi \,\|\, \pi^B] = \mathbb{E}_{a \sim \pi}[\log \pi(a) - \log \pi^B(a)] \quad (3)$$

$$= \mathbb{E}_{a \sim \pi}\big[\log \pi(a) - \log \bar{\pi}(a) - \bar{r}(a)/\tau + \log \mathbb{E}_{a' \sim \bar{\pi}}[\exp(\bar{r}(a')/\tau)]\big] \quad (4)$$

$$= D_{\mathrm{KL}}[\pi \,\|\, \bar{\pi}] - \mathbb{E}_{a \sim \pi}[\bar{r}(a)/\tau] + \log \mathbb{E}_{a' \sim \bar{\pi}}[\exp(\bar{r}(a')/\tau)] \quad (5)$$

Rearranging and multiplying by $\tau$,

$$\mathbb{E}_{a \sim \pi}[\bar{r}(a)] - \tau D_{\mathrm{KL}}[\pi \,\|\, \bar{\pi}] = \tau \log \mathbb{E}_{a' \sim \bar{\pi}}[\exp(\bar{r}(a')/\tau)] - \tau D_{\mathrm{KL}}[\pi \,\|\, \pi^B] \quad (6)$$
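Continuing the bandit sketch from above (same made-up rewards), Equation (2) and the consequence of Equation (6) can be checked numerically: the Boltzmann policy attains the value $\tau \log \mathbb{E}_{a \sim \bar{\pi}}[\exp(\bar{r}(a)/\tau)]$ and no perturbed policy does better:

```python
import numpy as np

def eta(pi, r_bar, pi_ref, tau):
    kl = np.sum(pi * (np.log(pi) - np.log(pi_ref)))
    return np.dot(pi, r_bar) - tau * kl

r_bar  = np.array([1.0, 2.0, 0.5])
pi_ref = np.array([1/3, 1/3, 1/3])
tau    = 0.5

# Boltzmann policy of Equation (2): pi_B(a) proportional to pi_ref(a) * exp(r_bar(a)/tau).
w = pi_ref * np.exp(r_bar / tau)
pi_B = w / w.sum()

# Equation (6) with KL(pi || pi_B) = 0: the maximum of eta is tau * log E_{pi_ref}[exp(r_bar/tau)].
best = tau * np.log(np.dot(pi_ref, np.exp(r_bar / tau)))
print(np.isclose(eta(pi_B, r_bar, pi_ref, tau), best))   # True

# No randomly perturbed policy does better than pi_B.
rng = np.random.default_rng(1)
for _ in range(1000):
    p = np.clip(pi_B + 0.1 * rng.normal(size=3), 1e-8, None)
    p /= p.sum()
    assert eta(p, r_bar, pi_ref, tau) <= eta(pi_B, r_bar, pi_ref, tau) + 1e-10
print("pi_B maximizes eta on all sampled perturbations")
```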
1704.06440#6
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
7
strategy (Section 4). Utilizing the ‘agent’ strategy, there is no need to sample pairs and triplets of samples nor to implement the hard mining algorithm. We also propose two tricks to improve performance for both static and video face verification. The first is to merge features extracted from both the original image and the mirror image by summation, while previous works usually merge the features by concatenation[31, 36]. The second is to use the histogram of face similarities between video pairs instead of the mean[23, 36] or max[39] similarity when making classification. Finally, by experiments, we show that normalization during training can promote the accuracies of two publicly available state-of-the-art models by 0.2 ∼ 0.4% on LFW[10] and about 0.6% on YTF[37].
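A rough sketch of the two evaluation tricks described above; the feature dimensions and random vectors are stand-ins for real CNN features, and the histogram size is an arbitrary choice:

```python
import numpy as np

def merge_mirror_features(feat_original, feat_mirrored):
    # Trick 1: merge the two features by summation (not concatenation), then re-normalize.
    merged = feat_original + feat_mirrored
    return merged / (np.linalg.norm(merged) + 1e-12)

def video_pair_histogram(frames_a, frames_b, bins=20):
    # Trick 2: describe a video pair by the histogram of all frame-to-frame
    # cosine similarities, rather than a single mean or max similarity.
    a = frames_a / np.linalg.norm(frames_a, axis=1, keepdims=True)
    b = frames_b / np.linalg.norm(frames_b, axis=1, keepdims=True)
    sims = (a @ b.T).ravel()
    hist, _ = np.histogram(sims, bins=bins, range=(-1.0, 1.0), density=True)
    return hist   # in the full pipeline this vector would be fed to a classifier

rng = np.random.default_rng(0)
print(merge_mirror_features(rng.normal(size=128), rng.normal(size=128)).shape)              # (128,)
print(video_pair_histogram(rng.normal(size=(30, 128)), rng.normal(size=(25, 128))).shape)   # (20,)
```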
1704.06369#7
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
7
Clearly the left-hand side is maximized (with respect to $\pi$) when the KL term on the right-hand side is minimized (as the other term does not depend on $\pi$), and $D_{\mathrm{KL}}[\pi \,\|\, \pi^B]$ attains its minimum of zero at $\pi = \pi^B$. The preceding calculation gives us the optimal policy when $\bar{r}$ is known, but in the entropy-regularized bandit problem, it is initially unknown, and the agent learns about it by sampling. There are two approaches for solving the entropy-regularized bandit problem:

1. A direct, policy-based approach, where we incrementally update the agent’s policy $\pi$ based on stochastic gradient ascent on $\eta$.
2. An indirect, value-based approach, where we learn an action-value function $q_\theta$ that estimates and approximates $\bar{r}$, and we define $\pi$ based on our current estimate of $q_\theta$.

For the policy-based approach, we can obtain unbiased estimates of the gradient of $\eta$. For a parameterized policy $\pi_\theta$, the gradient is given by

$$\nabla_\theta \eta(\pi_\theta) = \mathbb{E}_{a \sim \pi_\theta, r}\big[\nabla_\theta \log \pi_\theta(a)\, r - \tau \nabla_\theta D_{\mathrm{KL}}[\pi_\theta \,\|\, \bar{\pi}]\big]. \quad (7)$$

We can obtain an unbiased gradient estimate using a single sample $(a, r)$.
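For a discrete bandit with a softmax-parameterized (tabular) policy, the expectation of the estimator in Equation (7) has a simple closed form, $\pi_\theta \odot (g - \eta(\theta))$ with $g(a) = \bar{r}(a) - \tau(\log \pi_\theta(a) - \log \bar{\pi}(a))$; this closed form is derived here only for illustration (it is not stated in the paper) and is checked against finite differences below:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def eta(theta, r_bar, pi_ref, tau):
    pi = softmax(theta)
    return np.dot(pi, r_bar) - tau * np.sum(pi * (np.log(pi) - np.log(pi_ref)))

def eta_grad(theta, r_bar, pi_ref, tau):
    # Closed-form gradient of eta w.r.t. the logits of a tabular softmax policy:
    # grad_j = pi_j * (g_j - eta), with g(a) = rbar(a) - tau*(log pi(a) - log pi_ref(a)).
    pi = softmax(theta)
    g = r_bar - tau * (np.log(pi) - np.log(pi_ref))
    return pi * (g - np.dot(pi, g))

r_bar  = np.array([1.0, 2.0, 0.5])
pi_ref = np.array([1/3, 1/3, 1/3])
tau    = 0.5
theta  = np.array([0.3, -0.2, 0.1])

analytic = eta_grad(theta, r_bar, pi_ref, tau)
h = 1e-6
numeric = np.array([
    (eta(theta + h * e, r_bar, pi_ref, tau) - eta(theta - h * e, r_bar, pi_ref, tau)) / (2 * h)
    for e in np.eye(3)
])
print(np.max(np.abs(analytic - numeric)))   # ~1e-9
```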
1704.06440#7
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
8
2 RELATED WORKS Normalization in Neural Network. Normalization is a common operation in modern neural network models. Local Response Nor- malization and Local Contrast Normalization are studied in the AlexNet model[12], even though these techniques are no longer common in modern models. Batch normalization[11] is widely used to accelerate the speed of neural network convergence by reducing the internal covariate shift of intermediate features. Weight normal- ization [27] was proposed to normalize the weights of convolution layers and inner-product layers, and also lead to faster convergence speed. Layer normalization [1] tried to solve the batch size depen- dent problem of batch normalization, and works well on Recurrent Neural Networks. Face Verification. Face verification is to decide whether two im- ages containing faces represent the same person or two different people, and thus is important for access control or re-identification tasks. Face verification using deep learning techniques achieved a series of breakthroughs in recent years [20, 23, 28, 33, 36]. There are mainly two types of methods according to their loss functions. One type uses metric learning loss functions, such as contrastive loss[4, 40] and triplet loss[23, 28, 34]. The other type uses
1704.06369#8
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
8
We can obtain an unbiased gradient estimate using a single sample $(a, r)$. In the indirect, value-based approach, it is natural to use a squared-error loss:

$$L_\pi(\theta) = \tfrac{1}{2}\mathbb{E}_{a \sim \pi, r}\big[(q_\theta(a) - r)^2\big] \quad (8)$$

Taking the gradient of this loss with respect to the parameters of $q_\theta$, we get

$$\nabla_\theta L_\pi(\theta) = \mathbb{E}_{a \sim \pi, r}\big[\nabla_\theta q_\theta(a)\,(q_\theta(a) - r)\big] \quad (9)$$

Soon, we will calculate the relationship between this loss gradient and the policy gradient from Equation (7). In the indirect, value-based approach, a natural choice for policy $\pi$ is the one that would be optimal if $q_\theta = \bar{r}$. Let’s denote this policy, called the Boltzmann policy, by $\pi^B_{q_\theta}$, where

$$\pi^B_{q_\theta}(a) = \bar{\pi}(a) \exp(q_\theta(a)/\tau) \,/\, \mathbb{E}_{a' \sim \bar{\pi}}[\exp(q_\theta(a')/\tau)]. \quad (10)$$

It will be convenient to introduce a bit of notation for the normalizing factor; namely, we define the scalar

$$v_\theta = \tau \log \mathbb{E}_{a \sim \bar{\pi}}[\exp(q_\theta(a)/\tau)] \quad (11)$$
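A sketch of Equations (10)-(12): computing the normalizer $v_\theta$ and the Boltzmann policy from an arbitrary $q_\theta$ (the numbers below are made up):

```python
import numpy as np

def boltzmann_policy(q, pi_ref, tau):
    # v_theta = tau * log E_{a~pi_ref}[exp(q(a)/tau)]     (Equation 11)
    v = tau * np.log(np.dot(pi_ref, np.exp(q / tau)))
    # pi_B(a) = pi_ref(a) * exp((q(a) - v) / tau)         (Equation 12)
    return pi_ref * np.exp((q - v) / tau), v

q      = np.array([0.8, 1.5, -0.3])    # arbitrary Q-value estimates
pi_ref = np.array([0.5, 0.25, 0.25])   # reference policy
tau    = 0.5

pi_B, v = boltzmann_policy(q, pi_ref, tau)
print(pi_B, pi_B.sum())                # a valid distribution (sums to 1)

# With a uniform reference policy, pi_B reduces to an ordinary softmax of q/tau.
uniform = np.full(3, 1/3)
pi_u, _ = boltzmann_policy(q, uniform, tau)
print(np.allclose(pi_u, np.exp(q / tau) / np.exp(q / tau).sum()))   # True
```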
1704.06440#8
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
9
loss functions. One type uses metric learning loss functions, such as contrastive loss[4, 40] and triplet loss[23, 28, 34]. The other type uses soft- max loss and treats the problem as a classification task, but also constrains the intra-class variance to get better generalization for comparing face features [16, 36]. Some works also combine both kinds of loss functions[40, 41]. Metric Learning. Metric learning[4, 25, 34] tries to learn semantic distance measures and embeddings such that similar samples are nearer and different samples are further apart from each other on a manifold. With the help of neural networks’ enormous ability of representation learning, deep metric learning[3, 19] can do even better than the traditional methods. Recently, more complicated loss functions were proposed to get better local embedding structures[8, 22, 30]. Recent Works on Normalization. Recently, cosine similarity [17] was used instead of the inner-product for training a CNN for person recognition, which is quite similar with face verification. The Cosine Loss proposed in [17] is quite similar with the one described in Section 3.3, normalizing both the features and weights. L2-softmax[24] shares
1704.06369#9
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
9
$$v_\theta = \tau \log \mathbb{E}_{a \sim \bar{\pi}}[\exp(q_\theta(a)/\tau)] \quad (11)$$

Then the Boltzmann policy can be written as

$$\pi^B_{q_\theta}(a) = \bar{\pi}(a) \exp((q_\theta(a) - v_\theta)/\tau). \quad (12)$$

Note that the term $\tau \log \mathbb{E}_{a \sim \bar{\pi}}[\exp(\bar{r}(a)/\tau)]$ appeared earlier in Equation (6). Repeating the calculation from Equation (2) through Equation (6), but with $q_\theta$ instead of $\bar{r}$,

$$v_\theta = \mathbb{E}_{a \sim \pi^B_{q_\theta}}[q_\theta(a)] - \tau D_{\mathrm{KL}}\big[\pi^B_{q_\theta} \,\|\, \bar{\pi}\big]. \quad (13)$$

Hence, $v_\theta$ is an estimate of $\eta(\pi^B_{q_\theta})$, plugging in $q_\theta$ for $\bar{r}$. Now we shall show the connection between the gradient of the squared-error loss (Equation (9)) and the policy gradient (Equation (7)). Rearranging Equation (12), we can write $q_\theta$ in terms of $v_\theta$ and the Boltzmann policy $\pi^B_{q_\theta}$:

$$q_\theta(a) = v_\theta + \tau \log\!\left(\frac{\pi^B_{q_\theta}(a)}{\bar{\pi}(a)}\right) \quad (14)$$

Let’s substitute this expression for $q_\theta$ into the squared-error loss gradient (Equation (9)).
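Continuing the previous sketch, Equations (13) and (14) can be verified directly for the same made-up numbers:

```python
import numpy as np

q      = np.array([0.8, 1.5, -0.3])
pi_ref = np.array([0.5, 0.25, 0.25])
tau    = 0.5

v = tau * np.log(np.dot(pi_ref, np.exp(q / tau)))        # Equation (11)
pi_B = pi_ref * np.exp((q - v) / tau)                    # Equation (12)

# Equation (13): v = E_{a~pi_B}[q(a)] - tau * KL(pi_B || pi_ref)
rhs = np.dot(pi_B, q) - tau * np.sum(pi_B * (np.log(pi_B) - np.log(pi_ref)))
print(np.isclose(v, rhs))                                # True

# Equation (14): q(a) = v + tau * log(pi_B(a) / pi_ref(a)), by construction.
print(np.allclose(q, v + tau * np.log(pi_B / pi_ref)))   # True
```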
1704.06440#9
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06440
10
Let’s substitute this expression for $q_\theta$ into the squared-error loss gradient (Equation (9)).

$$\nabla_\theta L(q_\theta) = \mathbb{E}_{a \sim \pi, r}\big[\nabla_\theta q_\theta(a)\,(q_\theta(a) - r)\big]$$

$$= \mathbb{E}_{a \sim \pi, r}\Big[\nabla_\theta\Big(v_\theta + \tau \log\tfrac{\pi^B_{q_\theta}(a)}{\bar{\pi}(a)}\Big)\Big(v_\theta + \tau \log\tfrac{\pi^B_{q_\theta}(a)}{\bar{\pi}(a)} - r\Big)\Big] \quad (16)$$

$$= \mathbb{E}_{a \sim \pi, r}\Big[\tau \nabla_\theta \log \pi^B_{q_\theta}(a)\Big(v_\theta + \tau \log\tfrac{\pi^B_{q_\theta}(a)}{\bar{\pi}(a)} - r\Big) + \nabla_\theta v_\theta\Big(v_\theta + \tau \log\tfrac{\pi^B_{q_\theta}(a)}{\bar{\pi}(a)} - r\Big)\Big] \quad (17)$$

Note that we have not yet decided on a sampling distribution $\pi$. Henceforth, we’ll assume actions were sampled by $\pi = \pi^B_{q_\theta}$.

$$\nabla_\theta D_{\mathrm{KL}}\big[\pi^B_{q_\theta} \,\|\, \bar{\pi}\big] = \nabla_\theta \int \mathrm{d}a\, \pi^B_{q_\theta}(a) \log\frac{\pi^B_{q_\theta}(a)}{\bar{\pi}(a)} \quad (18)$$

$$= \int \mathrm{d}a\, \nabla_\theta \pi^B_{q_\theta}(a) \log\frac{\pi^B_{q_\theta}(a)}{\bar{\pi}(a)} \quad (19)$$

(moving the gradient inside and using the identity $\int \mathrm{d}a\, \nabla_\theta \pi^B_{q_\theta}(a) = 0$)
1704.06440#10
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
11
Figure 2: Left: The optimized 2-dimensional feature distribution using softmax loss on MNIST[14] dataset. Note that the Euclidean distance between f1 and f2 is much smaller than the distance between f2 and f3, even though f2 and f3 are from the same class. Right: The softmax probability for class 0 on the 2-dimension plane. Best viewed in color.

Margin Softmax[16] by normalizing the weights of the last inner-product layer only. Von Mises-Fisher Mixture Model (vMFMM)[21] interprets the hypersphere embedding as a mixture of von Mises-Fisher distributions. To sum up, the Cosine Loss[17], vMFMM[21] and our proposed loss functions optimize both features and weights, while the L2-softmax[24] normalizes the features only and the SphereFace[35] normalizes the weights only.

3 L2 NORMALIZATION LAYER

In this section, we answer the question of why we should normalize the features when the loss function is softmax loss and why the network does not converge if we directly put a softmax loss on the normalized features.
1704.06369#11
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
11
Moving the gradient inside and using the identity $\int \mathrm{d}a\; \nabla_\theta \pi^B_{q_\theta}(a) = 0$,

$= \int \mathrm{d}a\; \pi^B_{q_\theta}(a)\, \nabla_\theta \log \pi^B_{q_\theta}(a) \log\tfrac{\pi^B_{q_\theta}(a)}{\bar\pi(a)}$   (20)

$= \mathbb{E}_{a\sim\pi^B_{q_\theta}}\Big[\nabla_\theta \log \pi^B_{q_\theta}(a) \log\tfrac{\pi^B_{q_\theta}(a)}{\bar\pi(a)}\Big]$   (21)

Continuing from Equation (17) but setting $\pi = \pi^B_{q_\theta}$,

$\nabla_\theta L(q_\theta)\big|_{\pi=\pi^B_{q_\theta}} = \mathbb{E}_{a\sim\pi^B_{q_\theta},\,r}\big[\tau \nabla_\theta \log \pi^B_{q_\theta}(a)(v_\theta - r)\big] + \tau^2 \nabla_\theta D_{\mathrm{KL}}\big[\pi^B_{q_\theta} \,\|\, \bar\pi\big] + \nabla_\theta v_\theta\,\mathbb{E}_{a\sim\pi^B_{q_\theta},\,r}\big[v_\theta + \tau D_{\mathrm{KL}}[\pi^B_{q_\theta} \,\|\, \bar\pi] - r\big]$   (22)

$= \underbrace{-\tau \nabla_\theta \mathbb{E}_{a\sim\pi^B_{q_\theta},\,r}\big[r - \tau D_{\mathrm{KL}}[\pi^B_{q_\theta} \,\|\, \bar\pi]\big]}_{\text{policy gradient}} + \underbrace{\nabla_\theta \mathbb{E}_{a\sim\pi^B_{q_\theta},\,r}\Big[\tfrac{1}{2}\big(v_\theta - (r - \tau D_{\mathrm{KL}}[\pi^B_{q_\theta} \,\|\, \bar\pi])\big)^2\Big]\Big|_{\pi=\pi^B_{q_\theta}}}_{\text{value error gradient}}$   (23)
1704.06440#11
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
12
3.1 Necessity of Normalization

In order to give an intuitive feeling about the softmax loss, we did a toy experiment of training a deeper LeNet[13] model on the MNIST dataset[14]. We reduced the feature dimension to 2 and plotted 10,000 2-dimensional features from the training set on a plane in Figure 2. From the figure, we find that f2 can be much closer to f1 than to f3 if we use Euclidean distance as the metric. Hence directly using the features for comparison may lead to bad performance. At the same time, we find that the angles between feature vectors seem to be a good metric compared with Euclidean distance or inner-product operations. Actually, most previous work takes the cosine of the angle between feature vectors as the similarity [31, 36, 38], even though they all use softmax loss to train the network. Since the most common similarity metric for softmax loss is the inner-product with unnormalized features, there is a gap between the metrics used in the training and testing phases.

The reason why the softmax loss tends to create a ‘radial’ feature distribution (Figure 2) is that the softmax loss actually acts as the soft version of the max operator. Scaling the feature
1704.06369#12
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
12
Hence, the gradient of the squared error for our action-value function can be broken into two parts: the first part is the policy gradient of the Boltzmann policy corresponding to $q_\theta$; the second part arises from a value error objective, where we are fitting $v_\theta$ to the entropy-augmented expected reward $\mathbb{E}_{a\sim\pi^B_{q_\theta}}[r] - \tau D_{\mathrm{KL}}[\pi^B_{q_\theta} \,\|\, \bar\pi]$.

Soon we will derive an equivalent interpretation of Q-function regression in the MDP setting, where we are approximating the state-action value function $Q_\pi$. However, we first need to introduce an entropy-regularized version of the reinforcement learning problem.

# 3 Entropy-Regularized Reinforcement Learning
1704.06440#12
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06440
13
# 3 Entropy-Regularized Reinforcement Learning

We shall consider an entropy-regularized version of the reinforcement learning problem, following various prior work (Ziebart [2010], Fox et al. [2015], Haarnoja et al. [2017], Nachum et al. [2017]). Specifically, let us define the entropy-augmented return to be $\sum_{t=0}^{\infty} \gamma^t (r_t - \tau\,\mathrm{KL}_t)$, where $r_t$ is the reward, $\gamma \in [0, 1]$ is the discount factor, $\tau$ is a scalar temperature coefficient, and $\mathrm{KL}_t$ is the Kullback-Leibler divergence between the current policy $\pi$ and a reference policy $\bar\pi$ at timestep $t$: $\mathrm{KL}_t = D_{\mathrm{KL}}[\pi(\cdot \mid s_t) \,\|\, \bar\pi(\cdot \mid s_t)]$. We will sometimes use the notation $\mathrm{KL}(s) = D_{\mathrm{KL}}[\pi \,\|\, \bar\pi](s) = D_{\mathrm{KL}}[\pi(\cdot \mid s) \,\|\, \bar\pi(\cdot \mid s)]$. To emulate the effect of a standard entropy bonus (up to a constant), one can define $\bar\pi$ to be the uniform distribution. The subsequent sections will generalize some of the concepts from reinforcement learning to the setting where we are maximizing the entropy-augmented discounted return.

# 3.1 Value Functions
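A minimal sketch of the entropy-augmented return defined above, computed for one recorded trajectory; the reward and per-step KL values are placeholder numbers.

```python
import numpy as np

def entropy_augmented_return(rewards, kls, gamma, tau):
    # sum_t gamma^t * (r_t - tau * KL_t), where KL_t = D_KL[pi(.|s_t) || pi_bar(.|s_t)]
    rewards, kls = np.asarray(rewards), np.asarray(kls)
    discounts = gamma ** np.arange(len(rewards))
    return np.sum(discounts * (rewards - tau * kls))

print(entropy_augmented_return([1.0, 0.0, 2.0], [0.1, 0.3, 0.2], gamma=0.99, tau=0.5))
```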
1704.06440#13
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06440
14
# 3.1 Value Functions

We are obliged to alter our definitions of value functions to include the new KL penalty terms. We shall define the state-value function as the expected return:

$V_\pi(s) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t (r_t - \tau\,\mathrm{KL}_t) \,\Big|\, s_0 = s\Big]$   (24)

and we shall define the Q-function as

$Q_\pi(s, a) = \mathbb{E}\Big[r_0 + \sum_{t=1}^{\infty} \gamma^t (r_t - \tau\,\mathrm{KL}_t) \,\Big|\, s_0 = s, a_0 = a\Big]$.   (25)

Note that this Q-function does not include the first KL penalty term, which does not depend on the action $a_0$. This definition makes some later expressions simpler, and it leads to the following relationship between $Q_\pi$ and $V_\pi$:

$V_\pi(s) = \mathbb{E}_{a\sim\pi}[Q_\pi(s, a)] - \tau\,\mathrm{KL}(s)$,   (26)

which follows from matching terms in the sums in Equations (24) and (25).

# 3.2 Boltzmann Policy
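A small numerical illustration of Equation (26) at a single state, with an arbitrary policy, reference policy, and Q-values (all values are placeholders):

```python
import numpy as np

tau = 0.5
pi = np.array([0.6, 0.3, 0.1])          # pi(.|s)
pi_bar = np.array([1/3, 1/3, 1/3])      # reference policy pi_bar(.|s)
Q = np.array([2.0, 1.0, -0.5])          # Q_pi(s, .)

kl = np.sum(pi * np.log(pi / pi_bar))   # KL(s)
V = np.sum(pi * Q) - tau * kl           # Equation (26)
print(V)
```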
1704.06440#14
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
15
Figure 3: Two selected scatter diagrams when the bias term is added after the inner-product operation. Please note that there are one or two clusters that are located near the zero point. If we normalize the features of the center clusters, they would spread everywhere on the unit circle, which would cause misclassification. Best viewed in color.

label in range [1, n], W and b are the weight matrix and the bias vector of the last inner-product layer before the softmax loss, $W_j$ is the j-th column of W, which corresponds to the j-th class. In the testing phase, we classify a sample by

$\mathrm{Class}(\mathbf{f}) = i = \arg\max_i (W_i^T \mathbf{f} + b_i)$.   (2)

In this case, we can infer that $(W_i^T \mathbf{f} + b_i) - (W_j^T \mathbf{f} + b_j) \ge 0,\ \forall j \in [1, n]$. Using this inequality, we obtain the following proposition.

Proposition 1. For the softmax loss with no-bias inner-product similarity as its metric, let $P_i(\mathbf{f}) = \frac{e^{W_i^T \mathbf{f}}}{\sum_j e^{W_j^T \mathbf{f}}}$ denote the probability of $\mathbf{x}$ being classified as class $i$. For any given scale $s > 1$, if $i = \arg\max_j (W_j^T \mathbf{f})$, then $P_i(s\mathbf{f}) \ge P_i(\mathbf{f})$ always holds.
1704.06369#15
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
15
which follows from matching terms in the sums in Equations (24) and (25).

# 3.2 Boltzmann Policy

In standard reinforcement learning, the “greedy policy” for Q is defined as $\arg\max_a Q(s, a)$. With entropy regularization, we need to alter our notion of a greedy policy, as the optimal policy is stochastic. Since $Q_\pi$ omits the first entropy term, it is natural to define the following stochastic policy, which is called the Boltzmann policy, and is analogous to the greedy policy:

$\pi^B_Q(\cdot \mid s) = \arg\max_\pi \big\{\mathbb{E}_{a\sim\pi}[Q(s, a)] - \tau D_{\mathrm{KL}}[\pi \,\|\, \bar\pi](s)\big\}$   (27)

$\pi^B_Q(a \mid s) = \bar\pi(a \mid s) \exp(Q(s, a)/\tau) \,/\, \underbrace{\mathbb{E}_{a'\sim\bar\pi}[\exp(Q(s, a')/\tau)]}_{\text{normalizing constant}}$   (28)

where the second equation is analogous to Equation (2) from the bandit setting. Also analogously to the bandit setting, it is natural to define $V_Q$ (a function of Q) as

$V_Q(s) = \tau \log \mathbb{E}_{a'\sim\bar\pi}[\exp(Q(s, a')/\tau)]$   (29)

so that
1704.06440#15
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
16
of $\mathbf{x}$ being classified as class $i$. For any given scale $s > 1$, if $i = \arg\max_j (W_j^T \mathbf{f})$, then $P_i(s\mathbf{f}) \ge P_i(\mathbf{f})$ always holds.

The proof is given in Appendix 8.1. This proposition implies that softmax loss always encourages well-separated features to have bigger magnitudes. This is the reason why the feature distribution of softmax is ‘radial’. However, we may not need this property, as shown in Figure 2. By normalization, we can eliminate its effect. Thus, we usually use the cosine of two feature vectors to measure the similarity of two samples.

However, Proposition 1 does not hold if a bias term is added after the inner-product operation. In fact, the weight vectors of two classes could be the same and the model could still make a decision via the biases. We found this kind of case during the MNIST experiments and the scatters are shown in Figure 3. It can be discovered from the figure that the points of some classes all locate around the zero point, and after normalization the points from each of these classes may be spread out on the unit circle, overlapping with other classes. In these cases, feature normalization may destroy the discrimination ability of the specific classes. To avoid this kind of risk, we do not add the bias term before the softmax loss in this work, even though it is commonly used for classification tasks.

# 3.2 Layer Definition
1704.06369#16
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
16
$V_Q(s) = \tau \log \mathbb{E}_{a'\sim\bar\pi}[\exp(Q(s, a')/\tau)]$   (29)

so that

$\pi^B_Q(a \mid s) = \bar\pi(a \mid s) \exp((Q(s, a) - V_Q(s))/\tau)$.   (30)

Under this definition, it also holds that

$V_Q(s) = \mathbb{E}_{a\sim\pi^B_Q(\cdot\mid s)}[Q(s, a)] - \tau D_{\mathrm{KL}}[\pi^B_Q \,\|\, \bar\pi](s)$   (31)

in analogy with Equation (13). Hence, $V_Q(s)$ can be interpreted as an estimate of the expected entropy-augmented return, under the Boltzmann policy $\pi^B_Q$. Another way to interpret the Boltzmann policy is as the exponentiated advantage function. Defining the advantage function as $A_Q(s, a) = Q(s, a) - V_Q(s)$, Equation (30) implies that $\pi^B_Q(a \mid s) / \bar\pi(a \mid s) = \exp(A_Q(s, a)/\tau)$.

# 3.3 Fixed-Policy Backup Operators
1704.06440#16
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
17
# 3.2 Layer Definition

In this paper, we define $\|\mathbf{x}\|_2 = \sqrt{\sum_i x_i^2 + \epsilon}$, where $\epsilon$ is a small positive value to prevent dividing by zero. For an input vector $\mathbf{x} \in \mathbb{R}^n$, an L2 normalization layer outputs the normalized vector $\tilde{\mathbf{x}} = \frac{\mathbf{x}}{\|\mathbf{x}\|_2}$.

However, after normalization, the network fails to converge. The loss only decreases a little and then converges to a very big value within a few thousand iterations. After that the loss does not decrease no matter how many iterations we train and how small the learning rate is.
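A minimal sketch of the normalization just defined, with $\epsilon$ inside the square root; the specific epsilon value is an assumption.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # ||x||_2 = sqrt(sum_i x_i^2 + eps); returns x / ||x||_2
    norm = np.sqrt(np.sum(x * x) + eps)
    return x / norm

f = np.array([3.0, 4.0])
print(l2_normalize(f))   # ~[0.6, 0.8], unit length
```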
1704.06369#17
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
17
# 3.3 Fixed-Policy Backup Operators

The $\mathcal{T}_\pi$ operators (for Q and V) in standard reinforcement learning correspond to computing the expected return with a one-step lookahead: they take the expectation over one step of dynamics, and then fall back on the value function at the next timestep. We can easily generalize these operators to the entropy-regularized setting. We define

$[\mathcal{T}_\pi V](s) = \mathbb{E}_{a\sim\pi,\,(r,s')\sim P(r,s'\mid s,a)}\big[r - \tau\,\mathrm{KL}(s) + \gamma V(s')\big]$   (32)

$[\mathcal{T}_\pi Q](s, a) = \mathbb{E}_{(r,s')\sim P(r,s'\mid s,a)}\big[r + \gamma\big(\mathbb{E}_{a'\sim\pi}[Q(s', a')] - \tau\,\mathrm{KL}(s')\big)\big]$.   (33)

Repeatedly applying the $\mathcal{T}_\pi$ operator ($\mathcal{T}_\pi^n V = \underbrace{\mathcal{T}_\pi(\mathcal{T}_\pi(\dots \mathcal{T}_\pi(V)))}_{n \text{ times}}$) corresponds to computing the expected return with a multi-step lookahead. That is, repeatedly expanding the definition of $\mathcal{T}_\pi$, we obtain

$[\mathcal{T}_\pi^n V](s) = \mathbb{E}\Big[\sum_{t=0}^{n-1} \gamma^t (r_t - \tau\,\mathrm{KL}_t) + \gamma^n V(s_n) \,\Big|\, s_0 = s\Big]$   (34)
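A sketch of these one-step backups for a small randomly generated tabular MDP; the shapes, seed, and hyperparameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

S, A, gamma, tau = 4, 3, 0.95, 0.1
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))      # P[s, a, s'] transition probabilities
R = rng.normal(size=(S, A))                     # expected reward r(s, a)
pi = rng.dirichlet(np.ones(A), size=S)          # policy pi(a|s)
pi_bar = np.full((S, A), 1.0 / A)               # reference policy (uniform)
kl = np.sum(pi * np.log(pi / pi_bar), axis=1)   # KL(s) per state

def backup_V(V):
    # Eq. (32): [T_pi V](s) = E_{a~pi, s'~P}[ r - tau*KL(s) + gamma*V(s') ]
    return np.sum(pi * (R + gamma * (P @ V)), axis=1) - tau * kl

def backup_Q(Q):
    # Eq. (33): [T_pi Q](s,a) = E_{s'~P}[ r + gamma*(E_{a'~pi}[Q(s',a')] - tau*KL(s')) ]
    w = np.sum(pi * Q, axis=1) - tau * kl
    return R + gamma * (P @ w)

# Iterating backup_V to a fixed point approximates V_pi for this entropy-regularized MDP.
V = np.zeros(S)
for _ in range(500):
    V = backup_V(V)
```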
1704.06440#17
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
18
This is mainly because the range of $d(\mathbf{f}, \mathbf{W}_i)$ is only $[-1, 1]$ after normalization, while it is usually between $(-20, 20)$ and $(-80, 80)$ when we use an inner-product layer and softmax loss. This low range problem may prevent the probability $P_{y_i}(\mathbf{f}; \mathbf{W}) = \frac{e^{W_{y_i}^T \mathbf{f}}}{\sum_j e^{W_j^T \mathbf{f}}}$, where $y_i$ is $\mathbf{f}$'s label, from getting close to 1 even when the samples are well-separated. In the extreme case, $\frac{e}{e + (n-1)e^{-1}}$ is very small (0.45 when n = 10; 0.007 when n = 1000), even though in this condition the samples of all other classes are on the other side of the unit hypersphere. Since the gradient of softmax loss w.r.t. the ground truth label is $1 - P_{y_i}$, the model will always try to give large gradients to the well-separated samples, while the harder samples may not get sufficient gradients.
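The best-case probability quoted above is easy to reproduce; the helper below also shows what happens once the cosine logits are multiplied by a larger scale, anticipating the fix discussed later (the scale values are illustrative).

```python
import numpy as np

def best_case_prob(n, s=1.0):
    # P_yi when cos(f, W_yi) = 1 and cos(f, W_j) = -1 for all j != yi,
    # with cosine logits multiplied by a scale s (s = 1: plain normalized features).
    return np.exp(s) / (np.exp(s) + (n - 1) * np.exp(-s))

print(best_case_prob(10))            # ~0.45, as in the text
print(best_case_prob(1000))          # ~0.007
print(best_case_prob(1000, s=30.0))  # ~1.0: why a scale parameter is introduced later
```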
1704.06369#18
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
18
$[\mathcal{T}_\pi^n V](s) = \mathbb{E}\Big[\sum_{t=0}^{n-1} \gamma^t (r_t - \tau\,\mathrm{KL}_t) + \gamma^n V(s_n) \,\Big|\, s_0 = s\Big]$   (34)

$[\mathcal{T}_\pi^n Q](s, a) - \tau\,\mathrm{KL}(s) = \mathbb{E}\Big[\sum_{t=0}^{n-1} \gamma^t (r_t - \tau\,\mathrm{KL}_t) + \gamma^n (Q(s_n, a_n) - \tau\,\mathrm{KL}_n) \,\Big|\, s_0 = s, a_0 = a\Big]$.   (35)

As a sanity check, note that in both equations, the left-hand side and right-hand side correspond to estimates of the total discounted return $\sum_{t=0}^{\infty} \gamma^t (r_t - \tau\,\mathrm{KL}_t)$.

The right-hand side of these backup formulas can be rewritten using “Bellman error” terms $\delta_t$. To rewrite the state-value (V) backup, define

$\delta_t = (r_t - \tau\,\mathrm{KL}_t) + \gamma V(s_{t+1}) - V(s_t)$   (36)

Then we have

$[\mathcal{T}_\pi^n V](s) = V(s) + \mathbb{E}\Big[\sum_{t=0}^{n-1} \gamma^t \delta_t \,\Big|\, s_0 = s\Big]$.   (37)

# 3.4 Boltzmann Backups

We can define another set of backup operators corresponding to the Boltzmann policy, $\pi^B_Q(a \mid s) \propto \bar\pi(a \mid s) \exp(Q(s, a)/\tau)$. We define the following Boltzmann backup operator:
1704.06440#18
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
19
Figure 4: Left: The normalization operation and its gradient in 2-dimensional space. Please note that $\|\mathbf{x} + \alpha\frac{\partial \mathcal{L}}{\partial \mathbf{x}}\|_2$ is always bigger than $\|\mathbf{x}\|_2$ for all $\alpha > 0$ because of the Pythagorean theorem. Right: An example of the gradients w.r.t. the weight vector. All the gradients are in the tangent space of the unit sphere (denoted as the blue plane). The red, yellow and green points are normalized features from 3 different classes. The blue point is the normalized weight corresponding to the red class. Here we assume that the model tries to make features get close to their corresponding classes and away from other classes. Even though we illustrate the gradients applied on the normalized weight only, please note that opposite gradients are also applied on the normalized features (red, yellow, green points). Finally, all the gradients are accumulated together to decide which direction the weight should be updated. Best viewed in color, zoomed in.

To better understand this problem, we give a bound to clarify how small the softmax loss can be in the best case.
1704.06369#19
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
19
$[\mathcal{T}Q](s, a) = \mathbb{E}_{(r,s')\sim P(r,s'\mid s,a)}\Big[r + \gamma\underbrace{\big(\mathbb{E}_{a'\sim\pi^B_Q}[Q(s', a')] - \tau D_{\mathrm{KL}}[\pi^B_Q \,\|\, \bar\pi](s')\big)}_{(*)}\Big]$   (38)

$= \mathbb{E}_{(r,s')\sim P(r,s'\mid s,a)}\Big[r + \gamma\underbrace{\tau \log \mathbb{E}_{a'\sim\bar\pi}[\exp(Q(s', a')/\tau)]}_{(**)}\Big]$   (39)

where the simplification from $(*)$ to $(**)$ follows from the same calculation that we performed in the bandit setting (Equations (11) and (13)).

The n-step operator $\mathcal{T}_\pi^n$ for Q-functions also simplifies in the case that we are executing the Boltzmann policy. Starting with the equation for $\mathcal{T}_\pi^n Q$ (Equation (35)), setting $\pi = \pi^B_Q$, and then using Equation (31) to rewrite the expected Q-function terms in terms of $V_Q$, we obtain
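A sketch of the simplified backup in Equation (39) for a tabular Q-function, using a numerically stabilized log-sum-exp; the array shapes are assumptions.

```python
import numpy as np

def soft_backup(Q, R, P, pi_bar, gamma, tau):
    # Eq. (39): [T Q](s,a) = E_{s'}[ r + gamma * tau * log E_{a'~pi_bar}[exp(Q(s',a')/tau)] ]
    # Assumed shapes: Q, R, pi_bar: (S, A);  P: (S, A, S).
    m = Q.max(axis=1, keepdims=True)
    V = (m + tau * np.log(np.sum(pi_bar * np.exp((Q - m) / tau), axis=1, keepdims=True)))[:, 0]
    return R + gamma * (P @ V)
```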
1704.06440#19
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
20
To better understand this problem, we give a bound to clarify how small the softmax loss can be in the best case.

Proposition 2. (Softmax Loss Bound After Normalization) Assume that every class has the same number of samples, and all the samples are well-separated, i.e. each sample's feature is exactly the same as its corresponding class's weight. If we normalize both the features and every column of the weights to have a norm of $\ell$, the softmax loss will have a lower bound, $\log\big(1 + (n-1)e^{-\frac{n}{n-1}\ell^2}\big)$, where $n$ is the class number.

The proof is given in Appendix 8.2. Even though reading the proof needs patience, we still encourage readers to read it because you may get a better understanding of the hypersphere manifold from it.

For an input vector $\mathbf{x} \in \mathbb{R}^n$, an L2 normalization layer outputs the normalized vector

$\tilde{\mathbf{x}} = \frac{\mathbf{x}}{\|\mathbf{x}\|_2} = \frac{\mathbf{x}}{\sqrt{\sum_i x_i^2 + \epsilon}}$.   (3)

Here $\mathbf{x}$ can be either the feature vector $\mathbf{f}$ or one column of the weight matrix $\mathbf{W}_i$. In backward propagation, the gradient w.r.t. $\mathbf{x}$ can be obtained by the chain-rule,
1704.06369#20
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
20
to rewrite the expected Q-function terms in terms of $V_Q$, we obtain

$[(\mathcal{T}_{\pi^B_Q})^n Q](s, a) - \tau\,\mathrm{KL}(s) = \mathbb{E}\Big[\sum_{t=0}^{n-1} \gamma^t (r_t - \tau\,\mathrm{KL}_t) + \gamma^n (Q(s_n, a_n) - \tau\,\mathrm{KL}_n) \,\Big|\, s_0 = s, a_0 = a\Big]$   (40)

$= \mathbb{E}\Big[\sum_{t=0}^{n-1} \gamma^t (r_t - \tau\,\mathrm{KL}_t) + \gamma^n V_Q(s_n) \,\Big|\, s_0 = s, a_0 = a\Big]$.   (41)

From now on, let's denote this n-step backup operator by $\mathcal{T}_{\pi^B_Q, n}$. (Note that $\mathcal{T}_{\pi^B_Q, n} Q \neq \mathcal{T}^n Q$, even though $\mathcal{T}_{\pi^B_Q, 1} Q = \mathcal{T}Q$, because $\mathcal{T}_{\pi^B_Q, n}$ depends on $Q$.)

One can similarly define the TD($\lambda$) version of this backup operator

$[\mathcal{T}_{\pi^B_Q, \lambda} Q] = (1 - \lambda)\big(1 + \lambda \mathcal{T}_{\pi^B_Q} + (\lambda \mathcal{T}_{\pi^B_Q})^2 + \dots\big)\mathcal{T}_{\pi^B_Q} Q$.   (42)

One can straightforwardly verify by comparing terms that it satisfies
1704.06440#20
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
21
Here $\mathbf{x}$ can be either the feature vector $\mathbf{f}$ or one column of the weight matrix $\mathbf{W}_i$. In backward propagation, the gradient w.r.t. $\mathbf{x}$ can be obtained by the chain-rule,

$\frac{\partial \mathcal{L}}{\partial x_i} = \frac{\partial \mathcal{L}}{\partial \tilde{x}_i}\frac{\partial \tilde{x}_i}{\partial x_i} + \sum_j \frac{\partial \mathcal{L}}{\partial \tilde{x}_j}\frac{\partial \tilde{x}_j}{\partial \|\mathbf{x}\|_2}\frac{\partial \|\mathbf{x}\|_2}{\partial x_i} = \frac{\frac{\partial \mathcal{L}}{\partial \tilde{x}_i} - \tilde{x}_i \sum_j \frac{\partial \mathcal{L}}{\partial \tilde{x}_j}\tilde{x}_j}{\|\mathbf{x}\|_2}$.   (4)

This bound implies that if we just normalize the features and weights to 1, the softmax loss will be trapped at a very high value on the training set, even if no regularization is applied. For a real example, if we train the model on the CASIA-Webface dataset (n = 10575), the loss will decrease from about 9.27 to about 8.50. The bound for this condition is 8.27, which is very close to the real value. This suggests that our bound is very tight. To give an intuition for the bound, we also plot the curve of the bound as a function of the norm $\ell$ in Figure 5.
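A sketch of the backward pass in Equation (4): the incoming gradient is projected onto the tangent plane of the sphere and divided by the norm, so the result is orthogonal to x (function names and the epsilon value are assumptions).

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    norm = np.sqrt(np.sum(x * x) + eps)
    return x / norm, norm

def l2_normalize_backward(x, grad_xtilde, eps=1e-8):
    # Eq. (4): dL/dx = (dL/dx~ - x~ * <dL/dx~, x~>) / ||x||_2
    xtilde, norm = l2_normalize(x, eps)
    return (grad_xtilde - xtilde * np.dot(grad_xtilde, xtilde)) / norm

# The returned gradient is orthogonal to x, so a gradient step can only grow ||x||_2,
# which is why the text recommends weight decay on x.
x = np.array([3.0, 4.0])
g = np.array([1.0, 1.0])
print(np.dot(l2_normalize_backward(x, g), x))   # ~0: orthogonal to x
```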
1704.06369#21
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
21
One can straightforwardly verify by comparing terms that it satisfies

$[\mathcal{T}_{\pi^B_Q, \lambda} Q](s, a) = Q(s, a) + \mathbb{E}\Big[\sum_{t=0}^{\infty} (\gamma\lambda)^t \delta_t \,\Big|\, s_0 = s, a_0 = a\Big]$, where $\delta_t = (r_t - \tau\,\mathrm{KL}_t) + \gamma V_Q(s_{t+1}) - V_Q(s_t)$.   (43)

# 3.5 Soft Q-Learning

The Boltzmann backup operators defined in the preceding section can be used to define practical variants of Q-learning that can be used with nonlinear function approximation. These methods, which optimize the entropy-augmented return, will be called soft Q-learning. Following Mnih et al. [2015], modern implementations of Q-learning, and n-step Q-learning (see Mnih et al. [2016]), update the Q-function incrementally to compute the backup against a fixed target Q-function, which we'll call $\overline{Q}$. In the interval between each target network update, the algorithm is approximately performing the backup operation $Q \leftarrow \mathcal{T}\overline{Q}$ (1-step) or $Q \leftarrow \mathcal{T}_{\pi^B_{\overline{Q}}, n}\overline{Q}$ (n-step). To perform this approximate minimization, the algorithms minimize the least squares loss

$L(Q) = \mathbb{E}_{t, s_t, a_t}\big[\tfrac{1}{2}(Q(s_t, a_t) - y_t)^2\big]$, where
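A sketch of the 1-step soft Q-learning regression target and loss described here, for discrete actions; `q_next_row` stands for the frozen target network's values $\overline{Q}(s_{t+1}, \cdot)$, and all names are assumptions.

```python
import numpy as np

def soft_v(q_row, pi_bar_row, tau):
    # V_Qbar(s) = tau * log E_{a~pi_bar}[exp(Qbar(s,a)/tau)]   (stabilized log-sum-exp)
    m = q_row.max()
    return m + tau * np.log(np.sum(pi_bar_row * np.exp((q_row - m) / tau)))

def one_step_loss(q_sa, r, q_next_row, pi_bar_row, gamma, tau):
    # L = 0.5 * (Q(s_t, a_t) - y_t)^2 with y_t = r_t + gamma * V_Qbar(s_{t+1});
    # q_next_row comes from the frozen target network Qbar.
    y = r + gamma * soft_v(q_next_row, pi_bar_row, tau)
    return 0.5 * (q_sa - y) ** 2
```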
1704.06440#21
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
22
It is noteworthy that vector $\mathbf{x}$ and $\frac{\partial \mathcal{L}}{\partial \mathbf{x}}$ are orthogonal to each other, i.e. $\langle \mathbf{x}, \frac{\partial \mathcal{L}}{\partial \mathbf{x}} \rangle = 0$. From a geometric perspective, the gradient $\frac{\partial \mathcal{L}}{\partial \mathbf{x}}$ is the projection of $\frac{\partial \mathcal{L}}{\partial \tilde{\mathbf{x}}}$ onto the tangent space of the unit hypersphere at normal vector $\mathbf{x}$ (see Figure 4). From Figure 4 left, it can be inferred that after an update, $\|\mathbf{x}\|_2$ always increases. In order to prevent $\|\mathbf{x}\|_2$ growing infinitely, weight decay is necessary on vector $\mathbf{x}$.

After we obtain the bound, the solution to the convergence problem is clear. By normalizing the features and columns of the weights to a bigger value $\ell$ instead of 1, the softmax loss can continue to decrease. In practice, we may implement this by directly appending a scale layer after the cosine layer. The scale layer has only one learnable parameter $s = \ell^2$. We may also fix it to a value that is large enough, referring to Figure 5, say 20 or 30 for different class numbers. However, we prefer to make the parameter automatically learned by back-propagation instead of introducing a new hyper-parameter, for elegance.

3.3 Reformulating Softmax Loss

Using the normalization layer, we can directly optimize the cosine similarity between features and class weights. Finally, the softmax loss with cosine distance is defined as

$\mathcal{L} = -\frac{1}{m}\sum_{i=1}^{m} \log \frac{e^{s\,\tilde{W}_{y_i}^T \tilde{\mathbf{f}}_i}}{\sum_{j=1}^{n} e^{s\,\tilde{W}_j^T \tilde{\mathbf{f}}_i}}$   (6)
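A sketch of a scaled cosine-softmax loss in the spirit of Equation (6), with normalized features and class weights and a single scale parameter s; here s is passed as a fixed number rather than learned, and the array layout (class weights as rows of W) is an assumption.

```python
import numpy as np

def cosine_softmax_loss(F, W, labels, s=30.0, eps=1e-8):
    # F: (m, d) features, W: (n, d) class weights, labels: (m,) integer class ids.
    F_t = F / np.sqrt(np.sum(F * F, axis=1, keepdims=True) + eps)
    W_t = W / np.sqrt(np.sum(W * W, axis=1, keepdims=True) + eps)
    logits = s * F_t @ W_t.T                      # scaled cosine similarities, in [-s, s]
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])
```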
1704.06369#22
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
22
$L(Q) = \mathbb{E}_{t, s_t, a_t}\big[\tfrac{1}{2}(Q(s_t, a_t) - y_t)^2\big]$,   (44)

where

$y_t = r_t + \gamma V_{\overline{Q}}(s_{t+1})$   (1-step Q-learning)   (45)

$y_t = \tau\,\mathrm{KL}_t + \sum_{d=0}^{n-1} \gamma^d (r_{t+d} - \tau\,\mathrm{KL}_{t+d}) + \gamma^n V_{\overline{Q}}(s_{t+n})$   (n-step Q-learning)   (46)

$= \tau\,\mathrm{KL}_t + V_{\overline{Q}}(s_t) + \sum_{d=0}^{n-1} \gamma^d \delta_{t+d}$, where $\delta_t = (r_t - \tau\,\mathrm{KL}_t) + \gamma V_{\overline{Q}}(s_{t+1}) - V_{\overline{Q}}(s_t)$   (47)

In one-step Q-learning (Equation (45)), $y_t$ is an unbiased estimator of $[\mathcal{T}\overline{Q}](s_t, a_t)$, regardless of what behavior policy was used to collect the data. In n-step Q-learning (Equation (46)), for $n > 1$, $y_t$ is only an unbiased estimator of $[\mathcal{T}_{\pi^B_{\overline{Q}}, n}\overline{Q}](s_t, a_t)$

# 3.6 Policy Gradients

Entropy regularization is often used in policy gradient algorithms, with gradient estimators of the form

$\mathbb{E}_{t, s_t, a_t}\Big[\nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{t' \ge t} r_{t'} - \tau \nabla_\theta D_{\mathrm{KL}}[\pi_\theta \,\|\, \bar\pi](s_t)\Big]$   (48)
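A sketch of the n-step target in Equation (46) as reconstructed above, given the rewards and KL penalties for steps t, ..., t+n-1 and the target network's value at the state n steps ahead; names are assumptions.

```python
import numpy as np

def n_step_soft_q_target(rewards, kls, v_last, gamma, tau):
    # Eq. (46): y_t = tau*KL_t + sum_{d=0}^{n-1} gamma^d (r_{t+d} - tau*KL_{t+d})
    #                 + gamma^n * V_Qbar(s_{t+n})
    rewards, kls = np.asarray(rewards, float), np.asarray(kls, float)
    n = len(rewards)
    discounts = gamma ** np.arange(n)
    return tau * kls[0] + np.sum(discounts * (rewards - tau * kls)) + gamma ** n * v_last
```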
1704.06440#22
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
23
as

$d(\mathbf{f}, \mathbf{W}_i) = \langle \tilde{\mathbf{f}}, \tilde{\mathbf{W}}_i \rangle = \frac{\langle \mathbf{f}, \mathbf{W}_i \rangle}{\|\mathbf{f}\|_2 \|\mathbf{W}_i\|_2}$   (5)

where $\mathbf{f}$ is the feature and $\mathbf{W}_i$ represents the i-th column of the weight matrix of the inner-product layer before the softmax loss layer, and $\tilde{\mathbf{x}}$ is the normalized $\mathbf{x}$.

$\mathcal{L} = -\frac{1}{m}\sum_{i=1}^{m} \log \frac{e^{s\,\tilde{W}_{y_i}^T \tilde{\mathbf{f}}_i}}{\sum_{j=1}^{n} e^{s\,\tilde{W}_j^T \tilde{\mathbf{f}}_i}}$   (6)

Figure 5: The softmax loss' lower bound as a function of features and weights' norm. Note that the x axis is the squared norm $\ell^2$ because we add the scale parameter directly on the cosine distance in practice.
1704.06369#23
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
23
$\mathbb{E}_{t, s_t, a_t}\Big[\nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{t' \ge t} r_{t'} - \tau \nabla_\theta D_{\mathrm{KL}}[\pi_\theta \,\|\, \bar\pi](s_t)\Big]$   (48)

(Williams [1992], Mnih et al. [2016]). However, these are not proper estimators of the entropy-augmented return $\sum_t \gamma^t (r_t - \tau\,\mathrm{KL}_t)$, since they don't account for how actions affect entropy at future timesteps. Intuitively, one can think of the KL terms as a cost for "mental effort". Equation (48) only accounts for the instantaneous effect of actions on mental effort, not delayed effects. To compute proper gradient estimators, we need to include the entropy terms in the return. We will define the discounted policy gradient in the following two equivalent ways—first, in terms of the empirical return; second, in terms of the value functions $V_\pi$ and $Q_\pi$:
1704.06440#23
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
24
4 REFORMULATING METRIC LEARNING

Metric learning, or specifically deep metric learning in this work, usually takes pairs or triplets of samples as input and outputs the distance between them. In deep metric models, it is a common strategy to normalize the final features[22, 23, 28]. It seems that normalization does not cause any problems for metric learning loss functions. However, metric learning is more difficult to train than classification because the number of possible input pairs or triplets in metric learning models is very large, namely O(N²) combinations for pairs and O(N³) combinations for triplets, where N is the number of training samples. It is almost impossible to deal with all possible combinations during training, so sampling and hard-mining algorithms are usually necessary[28], which are tricky and time-consuming. By contrast, in a classification task, we usually feed the data iteratively into the model, i.e. the input data is in the order of O(N). In this section, we attempt to reformulate some metric learning loss functions to perform the classification task, while keeping their compatibility with the normalized features. The most widely used metric learning methods in the face verification community are the contrastive loss[31, 40],

$$\mathcal{L}_C = \begin{cases} \|\mathrm{f}_i - \mathrm{f}_j\|_2^2, & c_i = c_j \\ \max\!\big(0,\; m - \|\mathrm{f}_i - \mathrm{f}_j\|_2^2\big), & c_i \neq c_j \end{cases} \quad (7)$$

and the triplet loss[23, 28],
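To make the pair/triplet formulation concrete, here is a small NumPy sketch of the two losses on L2-normalized features (the triplet loss of Eq. (8) is quoted just below). The margin values follow the recommendations given later in this section, and the exact squaring convention follows one reading of the partially garbled extracted equations, so treat it as an assumption rather than the authors' exact formula:

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    # Project a feature vector onto the unit hypersphere.
    return x / (np.linalg.norm(x) + eps)

def contrastive_loss(f_i, f_j, same_class, m=1.0):
    # Eq. (7): same-class pairs minimize the squared normalized Euclidean
    # distance; different-class pairs are pushed until it exceeds margin m.
    d2 = np.sum((l2_normalize(f_i) - l2_normalize(f_j)) ** 2)
    return d2 if same_class else max(0.0, m - d2)

def triplet_loss(f_i, f_j, f_k, m=0.8):
    # Eq. (8): the anchor-positive squared distance should be smaller than
    # the anchor-negative squared distance by at least margin m.
    fi, fj, fk = l2_normalize(f_i), l2_normalize(f_j), l2_normalize(f_k)
    return max(0.0, m + np.sum((fi - fj) ** 2) - np.sum((fi - fk) ** 2))

rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 128))
print(contrastive_loss(a, p, same_class=True), triplet_loss(a, p, n))
```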
1704.06369#24
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
24
$$g_\gamma(\pi_\theta) := \mathbb{E}_{s_{0:\infty},\,a_{0:\infty}\sim\pi_\theta}\!\left[\sum_{t=0}^{\infty}\left(\nabla_\theta\log\pi_\theta(a_t\mid s_t)\Big(r_t + \sum_{d=1}^{\infty}\gamma^{d}\big(r_{t+d}-\tau\,\mathrm{KL}_{t+d}\big)\Big) - \tau\nabla_\theta D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t)\right)\right] \quad (49)$$

$$= \mathbb{E}_{s_{0:\infty},\,a_{0:\infty}\sim\pi_\theta}\!\left[\sum_{t=0}^{\infty}\left(\nabla_\theta\log\pi_\theta(a_t\mid s_t)\big(Q_\pi(s_t,a_t)-V_\pi(s_t)\big) - \tau\nabla_\theta D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t)\right)\right] \quad (50)$$

In the special case of a finite-horizon problem—i.e., $r_t = \mathrm{KL}_t = 0$ for all $t \ge T$—the undiscounted ($\gamma = 1$) return is finite, and it is meaningful to compute its gradient. In this case, $g_1(\pi_\theta)$ equals the undiscounted policy gradient:

$$g_1(\pi_\theta) = \nabla_\theta\,\mathbb{E}\!\left[\sum_{t=0}^{\infty}\big(r_t - \tau\,\mathrm{KL}_t\big)\right] \quad (51)$$

This result is obtained directly by considering the stochastic computation graph for the loss (Schulman et al. [2015a]), shown in the figure on the right. The edges from $\theta$ to the KL loss terms lead to the $\tau\nabla_\theta D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t)$ terms in the gradient; the edges to the stochastic actions $a_t$ lead to the $\sum_{d}\gamma^{d}\big(r_{t+d}-\tau\,\mathrm{KL}_{t+d}\big)$ terms in the
1704.06440#24
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
25
and the triplet loss[23, 28],

$$\mathcal{L}_T = \max\!\big(0,\; m + \|\mathrm{f}_i - \mathrm{f}_j\|_2^2 - \|\mathrm{f}_i - \mathrm{f}_k\|_2^2\big), \quad c_i = c_j,\ c_i \neq c_k, \quad (8)$$

where the two m's are the margins. Both loss functions optimize the normalized Euclidean distance between feature pairs.

Figure 6 (legend: feature, agent, class center, gradient): Illustration of how the C-contrastive loss works with two classes on a 3-d sphere (projected on a 2-d plane). Left: the special case of m = 0. In this case, the agents are only influenced by features from their own classes, and they will finally converge to the centers of their corresponding classes. Right: the normal case of m = 1. In this case, the agents are influenced by all the features in the same class and by other classes' features that fall inside the margin. Hence the agents are shifted away from the boundary of the two classes. The features then follow their agents through the intra-class term $\|\tilde{\mathrm{f}}_i - \tilde{W}_j\|_2^2,\ c_i = j$, as the gradients shown in the figure. Best viewed in color.

Note that after normalization, the reformulated softmax loss can also be seen as optimizing the normalized Euclidean distance,
1704.06369#25
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
25
Since g1(πθ) computes the gradient of the entropy-regularized return, one interpretation of gγ(πθ) is that it is an approximation of the undiscounted policy gradient g1(πθ), but that it allows for lower-variance gradient estimators by ignoring some long-term dependencies. A different interpretation of gγ(π) is that it gives a gradient flow such that π∗ = πB. As in the standard MDP setting, one can define approximations to gγ that use a value function to truncate the returns for variance reduction. These approximations can take the form of n-step methods (Mnih et al. [2016]) or TD(λ)-like methods (Schulman et al. [2015b]), though we will focus on n-step returns here. Based on the definition of gγ above, the natural choice of variance-reduced estimator is

$$\mathbb{E}_{t,s_t,a_t\sim\pi}\!\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{d=0}^{n-1} \gamma^{d}\,\delta_{t+d}\right] \quad (52)$$

where $\delta_t$ was defined in Equation (36).
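A short NumPy sketch of the n-step estimator in Eq. (52). Equation (36) defining δ_t is not part of this excerpt, so the TD-residual form used below (δ_t = r_t − τ·KL_t + γV(s_{t+1}) − V(s_t)) is an assumption chosen to be consistent with the n-step value target given in the next chunk; all numbers are toy values:

```python
import numpy as np

def deltas(rewards, kls, values, tau, gamma):
    # Assumed Eq. (36): delta_t = r_t - tau*KL_t + gamma*V(s_{t+1}) - V(s_t).
    r = np.asarray(rewards) - tau * np.asarray(kls)
    v = np.asarray(values)          # length T+1, last entry is the bootstrap value
    return r + gamma * v[1:] - v[:-1]

def n_step_advantage(d, gamma, n, t):
    # Eq. (52): sum_{d=0}^{n-1} gamma^d * delta_{t+d}
    window = d[t:t + n]
    return float(np.sum(gamma ** np.arange(len(window)) * window))

d = deltas([1.0, 0.5, 0.2, 0.0], [0.1, 0.05, 0.2, 0.0],
           [0.8, 0.7, 0.4, 0.1, 0.0], tau=0.01, gamma=0.99)
print(n_step_advantage(d, gamma=0.99, n=3, t=0))
```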
1704.06440#25
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
26
also be seen as optimizing the normalized Euclidean distance,

$$\mathcal{L}_{S'} = -\frac{1}{M}\sum_{i=1}^{M} \log \frac{e^{-\frac{s}{2}\left\|\tilde{\mathrm{f}}_i - \tilde{W}_{c_i}\right\|_2^2}}{\sum_{j=1}^{n} e^{-\frac{s}{2}\left\|\tilde{\mathrm{f}}_i - \tilde{W}_j\right\|_2^2}}, \quad (9)$$

because $\|\tilde{x} - \tilde{y}\|_2^2 = 2 - 2\tilde{x}^{\mathsf{T}}\tilde{y}$. Inspired by this formulation, we modify one of the features to be one column of a weight matrix $W \in \mathbb{R}^{d\times n}$, where d is the dimension of the feature and n is the number of classes. We call column $W_i$ the 'agent' of the i-th class. The weight matrix W can be learned through back-propagation just as in the inner-product layer. In this way, we get a classification version of the contrastive loss,

$$\mathcal{L}_{C'} = \begin{cases} \|\tilde{\mathrm{f}}_i - \tilde{W}_j\|_2^2, & c_i = j \\ \max\!\big(0,\; m - \|\tilde{\mathrm{f}}_i - \tilde{W}_j\|_2^2\big), & c_i \neq j \end{cases} \quad (10)$$

and of the triplet loss,

$$\mathcal{L}_{T'} = \max\!\big(0,\; m + \|\tilde{\mathrm{f}}_i - \tilde{W}_j\|_2^2 - \|\tilde{\mathrm{f}}_i - \tilde{W}_k\|_2^2\big), \quad c_i = j,\ c_i \neq k. \quad (11)$$

To distinguish these two loss functions from their metric learning versions, we call them the C-contrastive loss and the C-triplet loss respectively, denoting that these loss functions are designed for classification.
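A forward-pass sketch (NumPy) of the C-contrastive loss of Eq. (10) with an agent matrix W. In the paper W is a learnable parameter updated by back-propagation like an inner-product layer, whereas this snippet only evaluates the loss; the batch averaging and the margin treatment are assumptions consistent with the reconstruction above:

```python
import numpy as np

def l2_normalize_rows(x, eps=1e-12):
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def c_contrastive_loss(features, labels, W, m=1.0):
    # Eq. (10): each column of W (d x n) is the "agent" of one class.
    # A normalized feature is pulled toward its own agent and pushed away
    # from every other agent whose squared distance falls inside the margin m.
    f = l2_normalize_rows(features)              # (batch, d)
    w = l2_normalize_rows(W.T)                   # (n_classes, d), rows = agents
    d2 = ((f[:, None, :] - w[None, :, :]) ** 2).sum(-1)   # squared distances
    loss = 0.0
    for i, c in enumerate(labels):
        loss += d2[i, c]                                   # intra-class pull
        mask = np.arange(w.shape[0]) != c
        loss += np.sum(np.maximum(0.0, m - d2[i, mask]))   # inter-class push
    return loss / len(labels)

rng = np.random.default_rng(0)
feats, W = rng.normal(size=(4, 128)), rng.normal(size=(128, 10))
print(c_contrastive_loss(feats, labels=[0, 3, 3, 7], W=W))
```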
1704.06369#26
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
26
where $\delta_t$ was defined in Equation (36). The state-value function $V$ we use in the above formulas should approximate the entropy-augmented return $\sum_{t'}\gamma^{t'}\big(r_{t'} - \tau\,\mathrm{KL}_{t'}\big)$. We can fit $V$ iteratively by approximating the n-step backup $V \leftarrow [\mathcal{T}_{\pi,\gamma}]^{n} V$, minimizing a squared-error loss

$$L(V) = \mathbb{E}_{t,s_t}\!\left[(V(s_t) - \hat{y}_t)^2\right], \quad (53)$$

$$\text{where}\quad \hat{y}_t = \sum_{d=0}^{n-1}\gamma^{d}\big(r_{t+d} - \tau\,\mathrm{KL}_{t+d}\big) + \gamma^{n}V(s_{t+n}) = V(s_t) + \sum_{d=0}^{n-1}\gamma^{d}\,\delta_{t+d}. \quad (54)$$

4 Soft Q-learning Gradient Equals Policy Gradient

This section shows that the gradient of the squared-error loss from soft Q-learning (Section 3.5) equals the policy gradient (in the family of policy gradients described in Section 3.6) plus the gradient of a squared-error term for fitting the value function. We will not make any assumption about the parameterization of the Q-function, but we define $V_\theta$ and $\pi_\theta$ as the following functions of the parameterized Q-function $Q_\theta$:

$$V_\theta(s) := \tau \log \mathbb{E}_{a\sim\bar{\pi}}\!\left[\exp\big(Q_\theta(s,a)/\tau\big)\right] \quad (55)$$
1704.06440#26
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
27
Intuitively, $W_j$ acts as a summarizer of the features in the j-th class. If all classes are well separated by the margin, the $W_j$'s will roughly correspond to the means of the features in each class (Figure 6, left). In more complicated tasks, features of different classes may overlap with each other. Then the $W_j$'s will be shifted away from the boundaries. The marginal features (hard examples) are guided to have bigger gradients in this case (Figure 6, right), which means they move further than easier samples during the update.

Figure 7 (legend: feature, agent, minimize, maximize): Classification versions of the contrastive loss (left) and the triplet loss (right). The shadowed points are the marginal features that get omitted due to the 'agent' strategy. In the original version of the two losses, the shadowed points are also optimized. Best viewed in color.

However, there are some side effects of the agent strategy. After reformulation, some of the marginal features may not be optimized if we still use the same margin as the original version (Figure 7). Thus, we need larger margins to make more features get optimized. Mathematically, the error caused by the agent approximation is given by the following proposition.
1704.06369#27
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
27
$$V_\theta(s) := \tau \log \mathbb{E}_{a\sim\bar{\pi}}\!\left[\exp\big(Q_\theta(s,a)/\tau\big)\right] \quad (55)$$

$$\pi_\theta(a \mid s) := \bar{\pi}(a \mid s)\exp\big((Q_\theta(s,a) - V_\theta(s))/\tau\big) \quad (56)$$

Here, $\pi_\theta$ is the Boltzmann policy for $Q_\theta$, and $V_\theta$ is the normalizing factor we described above. From these definitions, it follows that the Q-function can be written as

$$Q_\theta(s,a) = V_\theta(s) + \tau \log \frac{\pi_\theta(a \mid s)}{\bar{\pi}(a \mid s)} \quad (57)$$

We will substitute this expression into the squared-error loss function. First, for convenience, let us define $\Delta_t = \sum_{d=0}^{n-1}\gamma^{d}\,\delta_{t+d}$. Now, let us consider the gradient of the n-step soft Q-learning objective:

$$\nabla_\theta\,\mathbb{E}_{t,s_t,a_t\sim\pi}\!\left[\tfrac{1}{2}\big\|Q_\theta(s_t,a_t) - y_t\big\|^2\right] \quad (58)$$

swap gradient and expectation, treating the state-action distribution as fixed:

$$= \mathbb{E}_{t,s_t,a_t\sim\pi}\!\left[\nabla_\theta Q_\theta(s_t,a_t)\,\big(Q_\theta(s_t,a_t) - y_t\big)\right]\Big|_{\pi=\pi_\theta} \quad (59)$$
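A small NumPy sketch of the definitions in Eqs. (55)–(56) for a discrete action space; it checks numerically that V_θ is exactly the normalizer that makes the Boltzmann policy sum to one. The max-shift for numerical stability and the toy numbers are implementation assumptions:

```python
import numpy as np

def soft_value(q, prior, tau):
    # Eq. (55): V(s) = tau * log E_{a~prior}[exp(Q(s,a)/tau)], with a
    # max-shift so the exponentials do not overflow.
    z = q / tau
    zmax = z.max()
    return tau * (zmax + np.log(np.sum(prior * np.exp(z - zmax))))

def boltzmann_policy(q, prior, tau):
    # Eq. (56): pi(a|s) = prior(a|s) * exp((Q(s,a) - V(s)) / tau).
    v = soft_value(q, prior, tau)
    return prior * np.exp((q - v) / tau)

q = np.array([1.0, 2.0, 0.5])
prior = np.full(3, 1.0 / 3.0)
pi = boltzmann_policy(q, prior, tau=0.1)
assert np.isclose(pi.sum(), 1.0)   # V(s) is exactly the log-normalizer
print(pi)
```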
1704.06440#27
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
28
Proposition 3. Using an agent for each class instead of a specific sample would cause a distortion of $\frac{1}{n_{C_i}}\sum_{j\in C_i}\big(d(\mathrm{f}_0, \mathrm{f}_j) - d(\mathrm{f}_0, W_i)\big)^2$, where $W_i$ is the agent of the i-th class. The distortion is bounded by $\frac{1}{n_{C_i}}\sum_{j\in C_i} d(\mathrm{f}_j, W_i)^2$.

The proof is given in Appendix 8.3. This bound gives us theoretical guidance for setting the margins. We can compute it on the fly during training using a moving average and display it to get a better feel for the training progress. Empirically, the bound $\frac{1}{n_{C_i}}\sum_{j\in C_i} d(\mathrm{f}_j, W_i)^2$ is usually 0.5 ∼ 0.6. The recommended values of the margins of the modified contrastive loss and triplet loss are 1 and 0.8 respectively. Note that setting the margin used to be a complicated task[40]. Following their work, we would have to suspend training and search for a new margin every several epochs. However, we no longer need to perform such a search after applying normalization. Through normalization, the scale of the features' magnitude is fixed, which makes it possible to fix the margin, too. In this strategy, we do not try to train models using the C-contrastive loss or the C-triplet loss without normalization, because doing so is difficult.
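A sketch of how the bound of Proposition 3 could be tracked on the fly with a moving average, as the text suggests; the batch-mean approximation of the per-class average, the momentum value and the class/API names are all assumptions for illustration:

```python
import numpy as np

class AgentDistortionBound:
    # Running estimate of the Proposition 3 bound: the mean squared normalized
    # distance between features and the agents of their own classes.
    def __init__(self, momentum=0.99):
        self.momentum, self.value = momentum, 0.0

    def update(self, features, agents, labels):
        f = features / np.linalg.norm(features, axis=1, keepdims=True)
        w = agents / np.linalg.norm(agents, axis=1, keepdims=True)
        d2 = np.sum((f - w[labels]) ** 2, axis=1)   # d(f_j, W_{c_j})^2 per sample
        self.value = self.momentum * self.value + (1 - self.momentum) * float(d2.mean())
        return self.value

rng = np.random.default_rng(0)
tracker = AgentDistortionBound()
print(tracker.update(rng.normal(size=(8, 128)), rng.normal(size=(10, 128)),
                     labels=np.array([0, 1, 2, 3, 4, 5, 6, 7])))
```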
1704.06369#28
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
28
$$= \mathbb{E}_{t,s_t,a_t\sim\pi}\!\left[\nabla_\theta Q_\theta(s_t,a_t)\,\big(Q_\theta(s_t,a_t) - y_t\big)\right]\Big|_{\pi=\pi_\theta} \quad (59)$$

replace $Q_\theta$ using Equation (57), and replace the Q-value backup $y_t$ by Equation (46):

$$= \mathbb{E}_{t,s_t,a_t\sim\pi}\!\left[\nabla_\theta Q_\theta(s_t,a_t)\left(\tau\log\tfrac{\pi_\theta(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)} + V_\theta(s_t) - \Big(V_\theta(s_t) + \tau D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t) + \Delta_t\Big)\right)\right]\Bigg|_{\pi=\pi_\theta} \quad (60)$$

cancel out $V_\theta(s_t)$:

$$= \mathbb{E}_{t,s_t,a_t\sim\pi}\!\left[\nabla_\theta Q_\theta(s_t,a_t)\left(\tau\log\tfrac{\pi_\theta(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)} - \tau D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t) - \Delta_t\right)\right]\Bigg|_{\pi=\pi_\theta} \quad (61)$$

replace the other $Q_\theta$ by Equation (57):

$$= \mathbb{E}_{t,s_t,a_t\sim\pi}\!\left[\big(\tau\nabla_\theta\log\pi_\theta(a_t\mid s_t) + \nabla_\theta V_\theta(s_t)\big)\left(\tau\log\tfrac{\pi_\theta(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)} - \tau D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t) - \Delta_t\right)\right]\Bigg|_{\pi=\pi_\theta} \quad (62)$$

expand out terms:
1704.06440#28
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
29
5 EXPERIMENT

In this section, we first describe the experiment settings in Section 5.1. Then we evaluate our method on two different datasets with two different models in Sections 5.2 and 5.3. Codes and models are released at https://github.com/happynear/NormFace.

5.1 Implementation Details

Baseline works. To verify our algorithm's universality, we choose two works as our baselines: Wu et al.'s model [38]^2 (Wu's model,

^2 https://github.com/AlfredXiangWu/face_verification_experiment
1704.06369#29
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
29
$$\begin{aligned} = \mathbb{E}_{t,s_t,a_t\sim\pi}\Big[\ & \tau^2\nabla_\theta\log\pi_\theta(a_t\mid s_t)\log\tfrac{\pi_\theta(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)} \underbrace{-\ \tau^2\nabla_\theta\log\pi_\theta(a_t\mid s_t)\,D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t)}_{(*)} -\ \tau\nabla_\theta\log\pi_\theta(a_t\mid s_t)\,\Delta_t \\ & \underbrace{+\ \tau\nabla_\theta V_\theta(s_t)\log\tfrac{\pi_\theta(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)} - \tau\nabla_\theta V_\theta(s_t)\,D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t)}_{(**)} -\ \nabla_\theta V_\theta(s_t)\,\Delta_t\ \Big]\Bigg|_{\pi=\pi_\theta} \quad (63) \end{aligned}$$

(*) vanishes because $\mathbb{E}_{a\sim\pi_\theta(\cdot\mid s_t)}\!\left[\nabla_\theta\log\pi_\theta(a\mid s_t)\cdot\mathrm{const}\right] = 0$

(**) vanishes because $\mathbb{E}_{a\sim\pi_\theta(\cdot\mid s_t)}\!\left[\log\tfrac{\pi_\theta(a\mid s_t)}{\bar{\pi}(a\mid s_t)}\right] = D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t)$

$$= \mathbb{E}_{t,s_t,a_t\sim\pi}\!\left[\tau^2\nabla_\theta D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t) + 0 - \tau\nabla_\theta\log\pi_\theta(a_t\mid s_t)\,\Delta_t + 0 - \nabla_\theta V_\theta(s_t)\,\Delta_t\right]\Bigg|_{\pi=\pi_\theta} \quad (64)$$

rearrange terms:
1704.06440#29
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
30
for short) and Wen et al.'s model [36]^3 (Wen's model, for short). Wu's model is a 10-layer plain CNN with the Maxout[6] activation unit. Wen's model is a 28-layer ResNet[7] trained with both the softmax loss and the center loss. Neither of these two models applies feature normalization or weight normalization. We strictly follow all the experimental settings in their papers, including the datasets^4, the image resolution, the pre-processing methods and the evaluation criteria.

Training. The proposed loss functions are appended after the feature layer, i.e. the second-to-last inner-product layer. The features and the columns of the weight matrix are normalized so that their L2 norms are 1. Then the features and the columns of the weight matrix are sent into a pairwise distance layer, i.e. an inner-product layer to produce cosine similarities or a Euclidean distance layer to produce normalized Euclidean distances. After calculating all the similarities or distances between each feature and each column, the proposed loss functions give the final loss and the gradients with respect to the distances. The whole network is trained end to end. To speed up the training procedure, we
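A minimal PyTorch sketch of the layer just described: both the features and the weight columns are L2-normalized, their inner products are cosine similarities, and a scaled softmax loss is applied on top. The released code is in Caffe, so this module, the scale value of 20 and the dimensions are illustrative assumptions rather than the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedCosineLayer(nn.Module):
    """Inner products between L2-normalized features and L2-normalized
    class weights, i.e. cosine similarities, scaled before softmax."""
    def __init__(self, feat_dim, num_classes, scale=20.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.scale = scale

    def forward(self, features):
        f = F.normalize(features, dim=1)       # unit-norm features
        w = F.normalize(self.weight, dim=1)    # unit-norm class weights (agents)
        return self.scale * f @ w.t()          # scaled cosine similarities

layer = NormalizedCosineLayer(feat_dim=256, num_classes=1000)
logits = layer(torch.randn(4, 256))
loss = F.cross_entropy(logits, torch.tensor([1, 5, 9, 42]))
loss.backward()
```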
1704.06369#30
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
30
$$= \mathbb{E}_{t,s_t,a_t\sim\pi}\Big[\underbrace{-\tau\nabla_\theta\log\pi_\theta(a_t\mid s_t)\,\Delta_t + \tau^2\nabla_\theta D_{\mathrm{KL}}[\pi_\theta\,\|\,\bar{\pi}](s_t)}_{\text{policy gradient}} + \underbrace{\nabla_\theta\tfrac{1}{2}\big\|V_\theta(s_t) - \hat{y}_t\big\|^2}_{\text{value function gradient}}\Big]\Bigg|_{\pi=\pi_\theta} \quad (65)$$

Note that the equivalent policy gradient method multiplies the policy gradient by a factor of $\tau$, relative to the value function error. Effectively, the value function error has a coefficient of $\tau^{-1}$, which is larger than what is typically used in practice (Mnih et al. [2016]). We will analyze this choice of coefficient in the experiments.

5 Soft Q-learning and Natural Policy Gradients

The previous section gave a first-order view on the equivalence between policy gradients and soft Q-learning; this section gives a second-order, coordinate-free view. As previous work has pointed out, the natural gradient is the solution to a regression problem; here we will explore the relation between that problem and the nonlinear regression in soft Q-learning.
1704.06440#30
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
31
the proposed loss functions give the final loss and the gradients with respect to the distances. The whole network is trained end to end. To speed up the training procedure, we fine-tune the networks from the baseline models. Thus, a relatively small learning rate, say 1e-4 for Wu's model and 1e-3 for Wen's model, is applied to update the network through stochastic gradient descent (SGD) with a momentum of 0.9.

Evaluation. Two datasets are used to evaluate the performance: one is Labeled Faces in the Wild (LFW)[10] and the other is YouTube Faces (YTF)[37]. 10-fold validation is used to evaluate the performance on both datasets. After the trained models converge, we continue to train them for 5,000 iterations^5, during which we save a snapshot every 1,000 iterations. Then we run the evaluation codes on the five saved snapshots separately and calculate an average score to reduce disturbance. We extract features from both the frontal face and its mirror image and merge the two features by element-wise summation. Principal Component Analysis (PCA) is then applied on the training subset of the evaluation dataset
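A small NumPy sketch of the feature-merging and scoring steps described above (mirror-image fusion by element-wise summation, then a cosine similarity between the merged features); the PCA projection and the 10-fold threshold search are omitted, and all names and shapes are illustrative:

```python
import numpy as np

def merged_feature(feat, feat_mirror):
    # Element-wise summation of the features of a face and its mirror image.
    return np.asarray(feat) + np.asarray(feat_mirror)

def verification_score(f1, f2):
    # Cosine similarity between two merged (optionally PCA-projected) features.
    f1 = f1 / np.linalg.norm(f1)
    f2 = f2 / np.linalg.norm(f2)
    return float(np.dot(f1, f2))

rng = np.random.default_rng(0)
a, a_flip, b, b_flip = rng.normal(size=(4, 256))
print(verification_score(merged_feature(a, a_flip), merged_feature(b, b_flip)))
```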
1704.06369#31
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
31
The natural gradient is defined as $F^{-1}g$, where $F$ is the average Fisher information matrix, $F = \mathbb{E}_{s,a\sim\pi}\!\left[(\nabla_\theta\log\pi_\theta(a\mid s))^{\mathsf{T}}(\nabla_\theta\log\pi_\theta(a\mid s))\right]$, and $g$ is the policy gradient estimate $g \propto \mathbb{E}\!\left[\nabla_\theta\log\pi_\theta(a\mid s)\,\hat{A}\right]$, where $\hat{A}$ is an estimate of the advantage function. As pointed out by Kakade [2002], the natural gradient step can be computed as the solution to a least squares problem. Given timesteps $t = 1, 2, \ldots, T$, define $\psi_t = \nabla_\theta\log\pi_\theta(a_t\mid s_t)$. Define $\Psi$ as the matrix whose $t$-th row is $\psi_t$, let $\hat{A}$ denote the vector whose $t$-th element is the advantage estimate $\hat{A}_t$, and let $\epsilon$ denote a scalar stepsize parameter. Consider the least squares problem

$$\min_w \tfrac{1}{2}\big\|\Psi w - \epsilon\hat{A}\big\|^2 \quad (66)$$

The least-squares solution is $w = \epsilon(\Psi^{\mathsf{T}}\Psi)^{-1}\Psi^{\mathsf{T}}\hat{A}$. Note that $\mathbb{E}\!\left[\Psi^{\mathsf{T}}\Psi\right]$ is the Fisher information matrix $F$, and $\mathbb{E}\!\left[\Psi^{\mathsf{T}}\hat{A}\right]$ is the policy gradient $g$, so $w$ is the estimated natural gradient.
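A compact NumPy sketch of the least squares problem in Eq. (66); it returns the estimated natural gradient step w = ε(ΨᵀΨ)⁻¹ΨᵀÂ. The random Ψ and Â below are placeholders for the per-timestep score vectors and advantage estimates:

```python
import numpy as np

def natural_gradient_step(psi, adv, eps):
    # Eq. (66): min_w ||Psi w - eps * A||^2, solved by least squares, gives
    # w = eps * (Psi^T Psi)^{-1} Psi^T A, the estimated natural gradient.
    w, *_ = np.linalg.lstsq(psi, eps * np.asarray(adv), rcond=None)
    return w

rng = np.random.default_rng(0)
psi = rng.normal(size=(128, 6))   # row t: grad_theta log pi(a_t | s_t)
adv = rng.normal(size=128)        # advantage estimates A_t
print(natural_gradient_step(psi, adv, eps=0.01))
```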
1704.06440#31
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
33
5.2 Experiments on LFW

The LFW dataset[10] contains 13,233 images from 5,749 identities, with large variations in pose, expression and illumination. All the images are collected from the Internet. We evaluate our methods through two different protocols on LFW: one is the standard unrestricted with labeled outside data [9], which is evaluated on 6,000 image pairs, and the other is BLUFR [15], which utilizes all 13,233 images. It is noteworthy that three identities appear in both CASIA-Webface[40] and LFW[10]. We remove them during training to build a complete open-set validation. We carefully test almost all combinations of the loss functions on the standard unrestricted with labeled outside data protocol. The results are listed in Table 2. Cosine similarity is used for softmax combined with any other loss function. The distance used by the C-contrastive and C-triplet losses is the squared normalized Euclidean distance. The C-triplet
1704.06369#33
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
33
Thus, we can interpret the least squares problem (Equation (66)) as solving

$$\min_\theta \sum_{t=1}^{T}\tfrac{1}{2}\big(\log\pi_\theta(a_t\mid s_t) - \log\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t) - \epsilon\hat{A}_t\big)^2 \quad (68)$$

That is, we are adjusting each log-probability $\log\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)$ by the advantage estimate $\hat{A}_t$, scaled by $\epsilon$. In entropy-regularized reinforcement learning, we have an additional term for the gradient of the KL-divergence:

$$g \propto \mathbb{E}\!\left[\nabla_\theta\log\pi_\theta(a_t\mid s_t)\,\hat{A}_t - \tau\nabla_\theta\,\mathrm{KL}[\pi_\theta,\bar{\pi}](s_t)\right] \quad (69)$$

$$= \mathbb{E}\!\left[\nabla_\theta\log\pi_\theta(a_t\mid s_t)\Big(\hat{A}_t - \tau\big[\log\big(\tfrac{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)}\big) - \mathrm{KL}[\pi_{\theta_{\mathrm{old}}},\bar{\pi}](s_t)\big]\Big)\right] \quad (70)$$

where the second line used the formula for the KL-divergence (Equation (21)) and the identity that $\mathbb{E}_{a_t\sim\pi_\theta}\!\left[\nabla_\theta\log\pi_\theta(a_t\mid s_t)\cdot\mathrm{const}\right] = 0$. Hence, the corresponding least squares problem (to compute $F^{-1}g$) is

$$\min_\theta \sum_{t=1}^{T}\tfrac{1}{2}\Big(\log\pi_\theta(a_t\mid s_t) - \log\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t) - \epsilon\Big(\hat{A}_t - \tau\big[\log\big(\tfrac{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)}\big) - \mathrm{KL}[\pi_{\theta_{\mathrm{old}}},\bar{\pi}](s_t)\big]\Big)\Big)^2 \quad (71)$$
1704.06440#33
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
34
^3 https://github.com/ydwen/caffe-face
^4 Since the identity label of the Celebrity+[18] dataset is not publicly available, we follow Wen's released model, which is trained on CASIA-Webface [40] only. Wu's model is also trained on CASIA-Webface [40] only.
^5 In each iteration we train 256 samples, i.e. the batch size is 256.

Figure 8: LFW accuracies as a function of the loss weight of the C-contrastive loss or the center loss, with error bars (curves: softmax + C-contrastive, softmax + center, softmax only, C-contrastive only, and the baseline; x-axis: loss weight, y-axis: accuracy). All these methods use the normalization strategy except for the baseline.
1704.06369#34
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
34
Now let's consider Q-learning. Let's assume that the value function is unchanged by optimization, so $V_\theta = V_{\theta_{\mathrm{old}}}$. (Otherwise, the equivalence will not hold, since the value function will try to explain the measured advantage $\Delta$, shrinking the advantage update.)

$$\tfrac{1}{2}\big(Q_\theta(s_t,a_t) - y_t\big)^2 = \tfrac{1}{2}\Big(\big(V_\theta(s_t) + \tau\log\tfrac{\pi_\theta(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)}\big) - \big(V_{\theta_{\mathrm{old}}}(s_t) + \tau\,\mathrm{KL}[\pi_{\theta_{\mathrm{old}}},\bar{\pi}](s_t) + \Delta_t\big)\Big)^2 \quad (72)$$

$$= \tfrac{1}{2}\Big(\tau\log\tfrac{\pi_\theta(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)} - \big(\Delta_t + \tau\,\mathrm{KL}[\pi_{\theta_{\mathrm{old}}},\bar{\pi}](s_t)\big)\Big)^2 \quad (73)$$

Evidently, we are regressing $\log\pi_\theta(a_t\mid s_t)$ towards $\log\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t) + \Delta_t/\tau + \mathrm{KL}[\pi_{\theta_{\mathrm{old}}},\bar{\pi}](s_t)$. This loss is not equivalent to the natural policy gradient loss that we obtained above.

We can recover the natural policy gradient by instead solving a damped version of the Q-function regression problem. Define $Q^{\epsilon}_t = (1-\epsilon)Q_{\theta_{\mathrm{old}}}(s_t,a_t) + \epsilon Q_t$, i.e., we are interpolating between the old value and the backed-up value.
1704.06440#34
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
35
Table 2: Results on LFW 6,000 pairs using Wen's model[36]

loss function | Normalization | Accuracy
softmax | No | 98.28%
softmax + dropout | No | 98.35%
softmax + center[36] | No | 99.03%
softmax | feature only | 98.72%
softmax | weight only | 98.95%
softmax | Yes | 99.16% ± 0.025%
softmax + center | Yes | 99.17% ± 0.017%
C-contrastive | Yes | 99.15% ± 0.017%
C-triplet | Yes | 99.11% ± 0.008%
C-triplet + center | Yes | 99.13% ± 0.017%
softmax + C-contrastive | Yes | 99.19% ± 0.008%

Figure 9: (a) Illustration of how to generate a histogram feature for a pair of videos. We first create a pairwise score matrix by computing the cosine similarity between two face images from different video sequences. Then we accumulate all the scores in the matrix to create a histogram. (b) Visualization of histogram features extracted from 200 video pairs with both same identities and different identities. After collecting all histogram features, a support vector machine (SVM) using the histogram intersection kernel (HIK) is utilized to make a binary classification.
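A NumPy sketch of the video-pair pipeline from the Figure 9 caption: all pairwise cosine similarities between the frames of two videos are accumulated into a histogram, and a histogram intersection kernel is then used for the SVM (e.g. scikit-learn's SVC accepts such a callable kernel). The bin count, the score range and the density normalization are assumptions:

```python
import numpy as np

def histogram_feature(frames_a, frames_b, bins=40):
    # Figure 9(a): pairwise cosine similarities between all frame features of
    # two videos, accumulated into a fixed-length histogram.
    a = frames_a / np.linalg.norm(frames_a, axis=1, keepdims=True)
    b = frames_b / np.linalg.norm(frames_b, axis=1, keepdims=True)
    scores = (a @ b.T).ravel()
    hist, _ = np.histogram(scores, bins=bins, range=(-1.0, 1.0), density=True)
    return hist

def histogram_intersection_kernel(X, Y):
    # HIK Gram matrix: sum of element-wise minima between histogram pairs.
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=-1)

rng = np.random.default_rng(0)
h1 = histogram_feature(rng.normal(size=(30, 256)), rng.normal(size=(25, 256)))
h2 = histogram_feature(rng.normal(size=(20, 256)), rng.normal(size=(35, 256)))
print(histogram_intersection_kernel(np.stack([h1, h2]), np.stack([h1, h2])))
```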
1704.06369#35
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
35
regression problem. Define $Q^{\epsilon}_t = (1-\epsilon)Q_{\theta_{\mathrm{old}}}(s_t,a_t) + \epsilon Q_t$, i.e., we are interpolating between the old value and the backed-up value.

$$Q^{\epsilon}_t = (1-\epsilon)Q_{\theta_{\mathrm{old}}}(s_t,a_t) + \epsilon Q_t = Q_{\theta_{\mathrm{old}}}(s_t,a_t) + \epsilon\big(Q_t - Q_{\theta_{\mathrm{old}}}(s_t,a_t)\big) \quad (74)$$

$$Q_t - Q_{\theta_{\mathrm{old}}}(s_t,a_t) = \big(V_{\theta_{\mathrm{old}}}(s_t) + \tau\,\mathrm{KL}[\pi_{\theta_{\mathrm{old}}},\bar{\pi}](s_t) + \Delta_t\big) - \big(V_{\theta_{\mathrm{old}}}(s_t) + \tau\log\tfrac{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)}\big) \quad (75)$$

$$= \Delta_t + \tau\big[\mathrm{KL}[\pi_{\theta_{\mathrm{old}}},\bar{\pi}](s_t) - \log\tfrac{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)}\big] \quad (76)$$

$$Q_\theta(s_t,a_t) - Q^{\epsilon}_t = Q_\theta(s_t,a_t) - \Big(Q_{\theta_{\mathrm{old}}}(s_t,a_t) + \epsilon\big(Q_t - Q_{\theta_{\mathrm{old}}}(s_t,a_t)\big)\Big) \quad (77)$$

$$= V_\theta(s_t) + \tau\log\tfrac{\pi_\theta(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)} - \Big\{V_{\theta_{\mathrm{old}}}(s_t) + \tau\log\tfrac{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)} + \epsilon\Big(\Delta_t + \tau\big[\mathrm{KL}[\pi_{\theta_{\mathrm{old}}},\bar{\pi}](s_t) - \log\tfrac{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)}\big]\Big)\Big\}$$

$$= \tau\log\pi_\theta(a_t\mid s_t) - \tau\log\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t) - \epsilon\Big(\Delta_t - \tau\big[\log\tfrac{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}{\bar{\pi}(a_t\mid s_t)} - \mathrm{KL}[\pi_{\theta_{\mathrm{old}}},\bar{\pi}](s_t)\big]\Big) \quad (78)$$
1704.06440#35
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]