doi
stringlengths 10
10
| chunk-id
int64 0
936
| chunk
stringlengths 401
2.02k
| id
stringlengths 12
14
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| references
list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1611.01368 | 36 | # 6 Additional Experiments
recurrent networks: Comparison to simple How much of the success of the network is due to the LSTM cells? We repeated the number prediction experiment with a simple recurrent network (SRN) (Elman, 1990), with the same number of hidden units. The SRNâs performance was inferior to the LSTMâs, but the average performance for a given
13One technical exception was that we did not replace low- frequency words with their part-of-speech, since the Google LM is a large-vocabulary language model, and does not have parts-of-speech as part of its vocabulary.
number of agreement attractors does not suggest a qualitative difference between the cell types: the SRN makes about twice as many errors as the LSTM across the board (Figure 4d).
Training only on difï¬cult dependencies: Only a small proportion of the dependencies in the corpus had agreement attractors (Figure 2e). Would the network generalize better if dependencies with in- tervening nouns were emphasized during training? We repeated our number prediction experiment, this time training the model only on dependencies with at least one intervening noun (of any number). We doubled the proportion of training sentences to 20%, since the total size of the corpus was smaller (226K dependencies). | 1611.01368#36 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 37 | This training regime resulted in a 27% decrease in error rate on dependencies with exactly one attractor (from 4.1% to 3.0%). This decrease is statistically signiï¬cant, and encouraging given that total number of dependencies in training was much lower, which complicates the learning of word embeddings. Error rates mildly decreased in dependencies with more attractors as well, suggesting some generalization (Figure 4d). Surprisingly, a similar experiment us- ing the grammaticality judgment task led to a slight increase in error rate. While tentative at this point, these results suggest that oversampling difï¬cult train- ing cases may be beneï¬cial; a curriculum progressing from easier to harder dependencies (Elman, 1993) may provide additional gains.
# 7 Error Analysis
Singular vs. plural subjects: Most of the nouns in English are singular: in our corpus, the fraction of singular subjects is 68%. Agreement attraction errors in humans are much more common when the attractor is plural than when it is singular (Bock and Miller, 1991; Eberhard et al., 2005). Do our modelsâ error rates depend on the number of the subject? | 1611.01368#37 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 38 | As Figure 2b shows, our LSTM number prediction model makes somewhat more agreement attraction errors with plural than with singular attractors; the difference is statistically signiï¬cant, but the asymme- try is much less pronounced than in humans. Inter- estingly, the SRN version of the model does show a large asymmetry, especially as the count of attractors increases; with four plural attractors the error rate
reaches 60% (Figure 4e).
Qualitative analysis: We manually examined a sample of 200 cases in which the majority of the 20 runs of the number prediction network made the wrong prediction. There were only 8890 such depen- dencies (about 0.6%). Many of those were straight- forward agreement attraction errors; others were dif- ï¬cult to interpret. We mention here three classes of errors that can motivate future experiments.
The networks often misidentiï¬ed the heads of noun-noun compounds. In (17), for example, the models predict a singular verb even though the num- ber of the subject conservation refugees should be determined by its head refugees. This suggests that the networks didnât master the structure of English noun-noun compounds.14
Conservation refugees live in a world col- ored in shades of gray; limbo.
Information technology (IT) assets com- monly hold large volumes of conï¬dential data. | 1611.01368#38 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 39 | Conservation refugees live in a world col- ored in shades of gray; limbo.
Information technology (IT) assets com- monly hold large volumes of conï¬dential data.
Some verbs that are ambiguous with plural nouns seem to have been misanalyzed as plural nouns and consequently act as attractors. The models predicted a plural verb in the following two sentences even though neither of them has any plural nouns, possibly because of the ambiguous verbs drives and lands:
The ship that the player drives has a very high speed.
It was also to be used to learn if the area where the lander lands is typical of the sur- rounding terrain.
Other errors appear to be due to difï¬culty not in identifying the subject but in determining whether it is plural or singular. In Example (22), in particular, there is very little information in the left context of the subject 5 paragraphs suggesting that the writer considers it to be singular:
Rabaul-based Japanese aircraft make three dive-bombing attacks.
14The dependencies are presented as they appeared in the corpus; the predicted number was the opposite of the correct one (e.g., singular in (17), where the original is plural).
The lead is also rather long; 5 paragraphs is pretty lengthy for a 62 kilobyte article. | 1611.01368#39 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 40 | The lead is also rather long; 5 paragraphs is pretty lengthy for a 62 kilobyte article.
The last errors point to a limitation of the number prediction task, which jointly evaluates the modelâs ability to identify the subject and its ability to assign the correct number to noun phrases.
# 8 Related Work
The majority of NLP work on neural networks eval- uates them on their performance in a task such as language modeling or machine translation (Sunder- meyer et al., 2012; Bahdanau et al., 2015). These evaluation setups average over many different syn- tactic constructions, making it difï¬cult to isolate the networkâs syntactic capabilities.
Other studies have tested the capabilities of RNNs to learn simple artiï¬cial languages. Gers and Schmid- huber (2001) showed that LSTMs can learn the context-free language anbn, generalizing to ns as high as 1000 even when trained only on n â {1, . . . , 10}. Simple recurrent networks struggled with this language (Rodriguez et al., 1999; Rodriguez, 2001). These results have been recently replicated and extended by Joulin and Mikolov (2015). | 1611.01368#40 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 41 | Elman (1991) tested an SRN on a miniature lan- guage that simulated English relative clauses, and found that the network was only able to learn the language under highly speciï¬c circumstances (El- man, 1993), though later work has called some of his conclusions into question (Rohde and Plaut, 1999; Cartling, 2008). Frank et al. (2013) studied the ac- quisition of anaphora coreference by SRNs, again in a miniature language. Recently, Bowman et al. (2015) tested the ability of LSTMs to learn an artiï¬- cial language based on propositional logic. As in our study, the performance of the network degraded as the complexity of the test sentences increased.
Karpathy et al. (2016) present analyses and visual- ization methods for character-level RNNs. K´ad´ar et al. (2016) and Li et al. (2016) suggest visualization techniques for word-level RNNs trained to perform tasks that arenât explicitly syntactic (image caption- ing and sentiment analysis). | 1611.01368#41 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 42 | Early work that used neural networks to model grammaticality judgments includes Allen and Sei- denberg (1999) and Lawrence et al. (1996). More re- cently, the connection between grammaticality judgments and the probabilities assigned by a language model was explored by Clark et al. (2013) and Lau et al. (2015). Finally, arguments for evaluating NLP models on a strategically sampled set of dependency types rather than a random sample of sentences have been made in the parsing literature (Rimell et al., 2009; Nivre et al., 2010; Bender et al., 2011).
# 9 Discussion and Future Work
Neural network architectures are typically evaluated on random samples of naturally occurring sentences, e.g., using perplexity on held-out data in language modeling. Since the majority of natural language sen- tence are grammatically simple, models can achieve high overall accuracy using ï¬awed heuristics that fail on harder cases. This makes it difï¬cult to distin- guish simple but robust sequence models from more expressive architectures (Socher, 2014; Grefenstette et al., 2015; Joulin and Mikolov, 2015). Our work suggests an alternative strategyâevaluation on natu- rally occurring sentences that are sampled based on their grammatical complexityâwhich can provide more nuanced tests of language models (Rimell et al., 2009; Bender et al., 2011). | 1611.01368#42 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 43 | This approach can be extended to the training stage: neural networks can be encouraged to develop more sophisticated generalizations by oversampling grammatically challenging training sentences. We took a ï¬rst step in this direction when we trained the network only on dependencies with intervening nouns (Section 6). This training regime indeed im- proved the performance of the network; however, the improvement was quantitative rather than qualitative: there was limited generalization to dependencies that were even more difï¬cult than those encountered in training. Further experiments are needed to establish the efï¬cacy of this method.
A network that has acquired syntactic represen- tations sophisticated enough to handle subject-verb agreement is likely to show improved performance on other structure-sensitive dependencies, including pronoun coreference, quantiï¬er scope and negative polarity items. As such, neural models used in NLP applications may beneï¬t from grammatically sophis- ticated sentence representations developed in a multi- task learning setup (Caruana, 1998), where the model is trained concurrently on the task of interest and on | 1611.01368#43 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 44 | one of the tasks we proposed in this paper. Of course, grammatical phenomena differ from each other in many ways. The distribution of negative polarity items is highly sensitive to semantic factors (Gian- nakidou, 2011). Restrictions on unbounded depen- dencies (Ross, 1967) may require richer syntactic representations than those required for subject-verb dependencies. The extent to which the results of our study will generalize to other constructions and other languages, then, is a matter for empirical research.
Humans occasionally make agreement attraction mistakes during language production (Bock and Miller, 1991) and comprehension (Nicol et al., 1997). These errors persist in human acceptability judg- ments (Tanner et al., 2014), which parallel our gram- maticality judgment task. Cases of grammatical agreement with the nearest rather than structurally rel- evant constituent have been documented in languages such as Slovenian (MaruËsiËc et al., 2007), and have even been argued to be occasionally grammatical in English (Zwicky, 2005). In future work, explor- ing the relationship between these cases and neural network predictions can shed light on the cognitive plausibility of those networks.
# 10 Conclusion | 1611.01368#44 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 45 | # 10 Conclusion
LSTMs are sequence models; they do not have built- in hierarchical representations. We have investigated how well they can learn subject-verb agreement, a phenomenon that crucially depends on hierarchical syntactic structure. When provided explicit supervi- sion, LSTMs were able to learn to perform the verb- number agreement task in most cases, although their error rate increased on particularly difï¬cult sentences. We conclude that LSTMs can learn to approximate structure-sensitive dependencies fairly well given ex- plicit supervision, but more expressive architectures may be necessary to eliminate errors altogether. Fi- nally, our results provide evidence that the language modeling objective is not by itself sufï¬cient for learn- ing structure-sensitive dependencies, and suggest that a joint training objective can be used to supplement language models on tasks for which syntax-sensitive dependencies are important.
# Acknowledgments | 1611.01368#45 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 46 | # Acknowledgments
We thank Marco Baroni, Grzegorz ChrupaÅa, Alexan- der Clark, Sol Lago, Paul Smolensky, Benjamin Spector and Roberto Zamparelli for comments and discussion. This research was supported by the European Research Council (grant ERC-2011-AdG 295810 BOOTPHON), the Agence Nationale pour la Recherche (grants ANR-10-IDEX-0001-02 PSL and ANR-10-LABX-0087 IEC) and the Israeli Science Foundation (grant number 1555/15).
# References
Joseph Allen and Mark S. Seidenberg. 1999. The emer- gence of grammaticality in connectionist networks. In Brian MacWhinney, editor, Emergentist approaches to language: Proceedings of the 28th Carnegie sym- posium on cognition, pages 115â151. Mahwah, NJ: Erlbaum.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference for Learning Representations. | 1611.01368#46 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 47 | Emily M. Bender, Dan Flickinger, Stephan Oepen, and Yi Zhang. 2011. Parser evaluation over local and non-local deep dependencies in a large corpus. In Pro- ceedings of EMNLP, pages 397â408.
Kathryn Bock and Carol A. Miller. 1991. Broken agree- ment. Cognitive Psychology, 23(1):45â93.
Melissa Bowerman. 1988. The âno negative evidenceâ problem: How do children avoid constructing an overly general grammar? In John A. Hawkins, editor, Explain- ing language universals, pages 73â101. Oxford: Basil Blackwell.
Samuel R. Bowman, Christopher D. Manning, and Christopher Potts. 2015. Tree-structured composi- tion in neural networks without tree-structured archi- tectures. In Proceedings of the NIPS Workshop on Cog- nitive Computation: Integrating Neural and Symbolic Approaches.
Bo Cartling. 2008. On the implicit acquisition of a context-free grammar by a simple recurrent neural net- work. Neurocomputing, 71(7):1527â1537.
Rich Caruana. 1998. Multitask learning. In Sebastian Thrun and Lorien Pratt, editors, Learning to learn, pages 95â133. Boston: Kluwer. | 1611.01368#47 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 48 | Rich Caruana. 1998. Multitask learning. In Sebastian Thrun and Lorien Pratt, editors, Learning to learn, pages 95â133. Boston: Kluwer.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. arXiv preprint arXiv:1312.3005.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase repre- sentations using RNN encoderâdecoder for statistical machine translation. In Proceedings of EMNLP, pages 1724â1734.
Noam Chomsky. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT press.
Alexander Clark, Gianluca Giorgolo, and Shalom Lap- pin. 2013. Statistical representation of grammaticality judgements: The limits of n-gram models. In Proceed- ings of the Fourth Annual Workshop on Cognitive Mod- eling and Computational Linguistics (CMCL), pages 28â36. | 1611.01368#48 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 49 | Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and A. Noah Smith. 2016. Recurrent neural network gram- mars. In Proceedings of NAACL/HLT, pages 199â209. Kathleen M. Eberhard, J. Cooper Cutting, and Kathryn Bock. 2005. Making syntax of sense: Number agree- ment in sentence production. Psychological Review, 112(3):531â559.
Jeffrey L. Elman. 1990. Finding structure in time. Cogni- tive Science, 14(2):179â211.
Jeffrey L. Elman. 1991. Distributed representations, sim- ple recurrent networks, and grammatical structure. Ma- chine Learning, 7(2-3):195â225.
Jeffrey L. Elman. 1993. Learning and development in neu- ral networks: The importance of starting small. Cogni- tion, 48(1):71â99.
Martin B. H. Everaert, Marinus A. C. Huybregts, Noam Chomsky, Robert C. Berwick, and Johan J. Bolhuis. 2015. Structures, not strings: Linguistics as part of the cognitive sciences. Trends in Cognitive Sciences, 19(12):729â743. | 1611.01368#49 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 50 | Robert Frank, Donald Mathis, and William Badecker. 2013. The acquisition of anaphora by simple recur- rent networks. Language Acquisition, 20(3):181â227. Felix Gers and J¨urgen Schmidhuber. 2001. LSTM re- current networks learn simple context-free and context- sensitive languages. IEEE Transactions on Neural Net- works, 12(6):1333â1340.
Anastasia Giannakidou. 2011. Negative and positive polarity items: Variation, licensing, and compositional- ity. In Claudia Maienborn, Klaus von Heusinger, and Paul Portner, editors, Semantics: An international hand- book of natural language meaning. Berlin: Mouton de Gruyter.
Yoav Goldberg and Joakim Nivre. 2012. A dynamic ora- cle for arc-eager dependency parsing. In Proceedings of COLING 2012, pages 959â976.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Su- leyman, and Phil Blunsom. 2015. Learning to trans- duce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1828â1836.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735â 1780. | 1611.01368#50 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 51 | Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735â 1780.
Rodney Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cam- bridge University Press, Cambridge.
Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190â198.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Exploring arXiv preprint Shazeer, and Yonghui Wu. the limits of language modeling. arXiv:1602.02410. 2016.
´Akos K´ad´ar, Grzegorz ChrupaÅa, and Afra Alishahi. 2016. Representation of linguistic form and func- arXiv preprint tion in recurrent neural networks. arXiv:1602.08952.
Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2016. Visualizing and understanding recurrent networks. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Confer- ence for Learning Representations. | 1611.01368#51 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 52 | Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Asso- ciation of Computational Linguistics, 4:313â327.
Jey Han Lau, Alexander Clark, and Shalom Lappin. 2015. Unsupervised prediction of acceptability judgements. In Proceedings of ACL/IJCNLP, pages 1618â1628. Steve Lawrence, Lee C. Giles, and Santliway Fong. 1996. Can recurrent neural networks learn natural language grammars? In IEEE International Conference on Neu- ral Networks, volume 4, pages 1853â1858.
Willem J. M. Levelt, Ardi Roelofs, and Antje S. Meyer. 1999. A theory of lexical access in speech production. Behavioral and Brain Sciences, 22(1):1â75.
Jiwei Li, Xinlei Chen, Eduard H. Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of NAACL-HLT 2016, pages 681â691. | 1611.01368#52 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 53 | Franc MaruËsiËc, Andrew Nevins, and Amanda Saksida. 2007. Last-conjunct agreement in Slovenian. In An- nual Workshop on Formal Approaches to Slavic Lin- guistics, pages 210â227.
Tomas Mikolov, Martin Karaï¬Â´at, Lukas Burget, Jan Cer- nock`y, and Sanjeev Khudanpur. 2010. Recurrent neu- ral network based language model. In INTERSPEECH, pages 1045â1048.
Janet L. Nicol, Kenneth I. Forster, and Csaba Veres. 1997. Subjectâverb agreement processes in comprehension. Journal of Memory and Language, 36(4):569â587.
Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos Gomez-Rodriguez. 2010. Evaluation of dependency parsers on unbounded dependencies. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 833â841. Association for Computa- tional Linguistics.
Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proceedings of EMNLP, pages 813â821. | 1611.01368#53 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 54 | Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proceedings of EMNLP, pages 813â821.
Paul Rodriguez, Janet Wiles, and Jeffrey L. Elman. 1999. A recurrent neural network that learns to count. Con- nection Science, 11(1):5â40.
Paul Rodriguez. 2001. Simple recurrent networks learn context-free and context-sensitive languages by count- ing. Neural Computation, 13(9):2093â2118.
Douglas L. T. Rohde and David C. Plaut. 1999. Language acquisition in the absence of explicit negative evidence: How important is starting small? Cognition, 72(1):67â 109.
John Robert Ross. 1967. Constraints on variables in syntax. Ph.D. thesis, MIT.
Carson T. Sch¨utze. 1996. The empirical base of linguis- tics: Grammaticality judgments and linguistic method- ology. Chicago, IL: University of Chicago Press. Richard Socher. 2014. Recursive Deep Learning for Natural Language Processing and Computer Vision. Ph.D. thesis, Stanford University.
Adrian Staub. 2009. On the interpretation of the number attraction effect: Response time evidence. Journal of Memory and Language, 60(2):308â327. | 1611.01368#54 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 55 | Adrian Staub. 2009. On the interpretation of the number attraction effect: Response time evidence. Journal of Memory and Language, 60(2):308â327.
Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In INTERSPEECH.
Darren Tanner, Janet Nicol, and Laurel Brehm. 2014. The time-course of feature interference in agreement com- prehension: Multiple mechanisms and asymmetrical attraction. Journal of Memory and Language, 76:195â 215.
Oriol Vinyals, Åukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2755â2763.
Arnold Zwicky. 2005. Agreement with nearest always http://itre.cis.upenn.edu/Ëmyl/ bad? languagelog/archives/001846.html.
# Unit 2: PP
Unit 0: PP,
# Unit 0: RC
# Unit 1: PP
# Unit 1: RC
# Unit 2: RC
# Unit3:PP | 1611.01368#55 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 56 | 10 10 y 0.2 s,Â¥s 0.2 ss _ , SY Oa 0.4 0.2 0.2 0.0 0.0 05 05 02 0.2 0.0 Y 0.0 Y. 0.2 0.2 ve 1Y 00 âYs 0.0 Ys 0.2 SY 92 Bey 4 A 0.0 Ys 0.0 02 sÂ¥si0-2 YY -0.4 04 38 ve 38 05 08 Ary 24 ysis 38 -10 YY =10 vou ys" OB 08 â08 SYS-08 s.Â¥s FPF SCOP CELSO CPF CL fees SPF ECP COP EFEEO CLF ECO CPSELLM Unit 4: PP Unit 4: RC UnitS: PP. it5:RC yoy. Unit 6: PP Unit6: RC yoy. Unit 7: PP Unit 7: RC â , Ys 0.8 0.8 : 02 y y 34 0s cy °° 0s SYS 0.8 YS os os i ay Lf S=q XY 0.0 "0.0 SY 90 sy 0.0 04 04 ws AA v9.4 Y s,Y Ys 0.5 y 05 YY 05 05 gy 0.2 YY 0.2 ca wees SA âYS YS 40 YS a9 00 VSS | 1611.01368#56 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 57 | Y s,Y Ys 0.5 y 05 YY 05 05 gy 0.2 YY 0.2 ca wees SA âYS YS 40 YS a9 00 VSS 0.0 1S ee 4 s a) we ) se we a we 4 s we w mC) Ra w a we a) s a) we @ ee w a) we A) S a) we rt) Ka we ro Unit 8: PP Unit 8: RC Unit 9: PP Unit 9: RC Unit 10: PP Unit 10: RC Unit 11: PP Unit 11: RC s,Ys s,Ys 0.0 YS 0.0 YS 1.0 sy 10 s.YS o2 0.2 WON 0.8 0.8 ~02 ayvs-0.2 os SYS os ve Et hav 99 SONG et oe v3 a ay. 5 bo Â¥ 00 0.4 0.4 5 04 0.4 ~06 0.6 â , ooâ 0.6 0.6 0.2 0.2 ~038 SY 0.8 , os 05 Ss 8 VE -08 0.0 Y oo WY 10 -1.0 Y 10 -1.0 Y > ¢ Â¥ 2 & & FO F Cw FO F FO e O F CO eeeee FO Fo CO FO F SO Unit 12:PP Unit 12: RC y Unit 13: PP YS Unit | 1611.01368#57 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 58 | Â¥ 2 & & FO F Cw FO F FO e O F CO eeeee FO Fo CO FO F SO Unit 12:PP Unit 12: RC y Unit 13: PP YS Unit 13: RC Unit 14: PP Unit 14: RC Unit 15: PP Unit 15: RC s, s, : 04 0.4 0.4 0.4 Ys 0.0 0.0 YS° 0.0 0.0 Ys -02 SY go 0.0 0.0 Y 05 YS os -0.2 vy. 22 ~04 y OA We0.5 Ys os SY âa0 Y 40 Y a4 Sg4 ~06 Ys 0.6 ey Ys FOF SP CP FEO CPF ESO CPF ECOP CPF CPO KC OFF O CPF CMH CP FEL Unit 16: PP Unit 16: RC Unit 17: PP Unit 17: RC vs Unit18:PP yo... Unit 18: RC Unit 19: PP Unit 19: RC a8 a8 sys 88 oa y 08 nv O4 SYS 04 SY pn" v0.6 0.6 .Ys oy 0.0 oo 93 SY 93 sy 02 YS âoa SYS 9 4 0.4 0.2 0.2 -06 0.6 5 S,YS-0.4 âys 04 08 YY 08 sy 0.2 ye 0.2 sy eeiee. | 1611.01368#58 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 59 | 0.2 0.2 -06 0.6 5 S,YS-0.4 âys 04 08 YY 08 sy 0.2 ye 0.2 sy eeiee. COPECO CPF ECO CPSP CLF ECO ECP SEO CPF SCO CPE M Unit 20: PP Unit 20: RC Unit 21: PP Unit 21: RC Unit 22: PP Unit 22: RC Unit 23: PP Unit 23: RC xsY 08 08 SYS9,10 0.10 Ys 0.2 0.2 0.2 0.2 06 0.6 0.05 0.05 0.0 0.0 sy 0.1 a O21 y%,, 04 0.4 0.00 sifsao0 Ys 0.2 s,Â¥s-0.2 sys ° oe Bis 02 ow 82 60-08 07-005 4 Ys 0.4 Ys 0.2 BÂ¥5-0.2 â2 eps 02 0.15 YY ~0.15 FPF SQ CP SEO CPF EO CSCO CPF ECO EC PSFO CLE CO CPS EL Unit 24: PP Unit24:RC Unit 25: PP Unit 25: RC Unit 26: PP Unit 26: RC Unit 27: PP Unit 27: RC YY 1.0 yy (10 NY ya 97 Y 97 05 Y 05 0s ys 05 Ys | 1611.01368#59 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 60 | 25: RC Unit 26: PP Unit 26: RC Unit 27: PP Unit 27: RC YY 1.0 yy (10 NY ya 97 Y 97 05 Y 05 0s ys 05 Ys 02 YS go y 08 05 0.0 Ys 0.0 Ys 0.0 s.¥s 0.0 SYS 00 0.0 Ys 93 $3 Y SYS 7 05 05 sYS95 SY 98 ~02 02 Sys 92 WS 02 Zi -1.0 SY 40 SY 04 ¥s04 0.0 Xs,YS 0.0 si¥s FPF CO CPE ECO ECPI ECO CPE CO CPF ECM CPS EO CLE SO CP ELL Unit 28: PP Unit 28: RC Unit 29: PP Unit 29: RC Unit 30: PP Unit 30: RC Unit 31: PP Unit 31: RC 0.0 S.YS 0.0 s\Ys 10 s,Ys 1.0 ss 9.9 $9 Ys 05 0.5 -0.2 SY 9.2 MS os SY 05 sy 02 Ys 02 0.0 Y oo YY 04 4 ; " 33 s,¥s0-3 Sys 05 SYS 95 i Oe Os v i Y: 38 * 38 oY 5 | A YS . . S¥s-10 na -1.0 0S "Ys os 0.7 s¥ | 1611.01368#60 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 61 | Os v i Y: 38 * 38 oY 5 | A YS . . SÂ¥s-10 na -1.0 0S "Ys os 0.7 sÂ¥ 0.7 SPF SCOP CLEFECO CLI ECO CP ECO CPF CMH CPF CLI SO CLELL Unit 32: PP Unit 32: RC Unit 33: PP Unit 33: RC Unit 34: PP Unit 34: RC Unit 35: PP Unit 35:RC , YY og YY 08 aa s Sy i sv. Oe sY 05 05 06 0.6 02 ys 0? SYS ba Sys ba s.Ys SY 94 Ys 0.4 oa âS° 9.0 0.0 Ys 0.0 2 ; a3 ee te a wed ene 2 0.1 01 08 SÂ¥s05 ey Fs-02 SYS 33 04 y 24 y 0.0 0.0 -1.0 -1.0 â SPF ECP CP FEO CPF ECP CPF ECP CPF ESO COFSLP CLF LSM OF FO Unit 36: PP Unit 36: RC Unit 37: PP Unit 37: RC Unit 38: PP Unit 38: RC Unit 39: PP Unit 39: RC 03 YS 93 ys vs 10 Ys 10 1.0 1.0 92 Sys 93 Sys â Y oo 0s âY 05 05 y 05 Y of | 1611.01368#61 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 62 | PP Unit 39: RC 03 YS 93 ys vs 10 Ys 10 1.0 1.0 92 Sys 93 Sys â Y oo 0s âY 05 05 y 05 Y of sy $9 SY 90 SYS gg See 00 oY 0.0 O< 0.0 S00 s,Y 0.2 02 : : 05 05 SYS 03 0.3 02 sy 0? ay SYS yes? 8 VPs we 4¢ $ we wv A) & vw a) wv rn) $ we wv << w aw w 4 $ Â¥ ro) we 4 & wv > vw nC) s ro) vw © se we a) Unit 40: PP Unit 40: RC Unit 41: PP Unit 41: RC Unit42:PP Unit 42: RC Unit 43: PP Unit 43: RC 04 YS o4 05 s,Ys 0.5 05 SY 05 SY Â¥s 02 Y 92 y 03 y 3 sys 04 0.4 05 sÂ¥s 0.5 0.0 y 20 0.2 *" 92 03 0.3 Ss, XY 0.2 SY 0.2 a 0.2 0.1 y 0.2 yy 0.0 0.0 OA s,Â¥s-0.4 0.0 YY 00 Â¥ ° oe a1 05 Y os 0.6 0.6 Ys 0-1 Ys 2 oa nts | 1611.01368#62 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01368 | 63 | s,Â¥s-0.4 0.0 YY 00 Â¥ ° oe a1 05 Y os 0.6 0.6 Ys 0-1 Ys 2 oa nts 0 ons? ye # OS FO FW SF CO FO F CW FWD SF FO FO F CO FO SF FO FW o SO FO SF FO Unit 44: PP Unit 44: RC Unit 45: PP Unit 45: RC Unit46:PP Unit 46: RC Unit 47: PP Unit 47: RC 1.0 Y 10 .Y .Y Â¥ SY 10 1.0 .Y 08 Ys 0.8 0.8 0.8 Bry 0s 0s s,s VS iG oe oe spb 0.6 0.6 Ys Ys 0.5 sy 05 ~ 33 33 02 oa os ae 28 ayaa ss , . Ys 04 05 05 ay hay âYS 00 s.Ys 0.0 BÂ¥s-10 SY 1.0 yy 08 os we rr) $ we wv A) & vw a) wv rn) $ we wv << w aw w 4 $ Â¥ ro) we 4 & wv > vw nC) s ro) vw © se we a) Unit 48: PP Unit 48: RC Unit 49: PP yoy. Unit 49: RC 0.4 0.4 0.0 5 0.0 e\Â¥s | 1611.01368#63 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The success of long short-term memory (LSTM) neural networks in language
processing is typically attributed to their ability to capture long-distance
statistical regularities. Linguistic regularities are often sensitive to
syntactic structure; can such dependencies be captured by LSTMs, which do not
have explicit structural representations? We begin addressing this question
using number agreement in English subject-verb dependencies. We probe the
architecture's grammatical competence both using training objectives with an
explicit grammatical target (number prediction, grammaticality judgments) and
using language models. In the strongly supervised settings, the LSTM achieved
very high overall accuracy (less than 1% errors), but errors increased when
sequential and structural information conflicted. The frequency of such errors
rose sharply in the language-modeling setting. We conclude that LSTMs can
capture a non-trivial amount of grammatical structure given targeted
supervision, but stronger architectures may be required to further reduce
errors; furthermore, the language modeling signal is insufficient for capturing
syntax-sensitive dependencies, and should be supplemented with more direct
supervision if such dependencies need to be captured. | http://arxiv.org/pdf/1611.01368 | Tal Linzen, Emmanuel Dupoux, Yoav Goldberg | cs.CL | 15 pages; to appear in Transactions of the Association for
Computational Linguistics | null | cs.CL | 20161104 | 20161104 | [
{
"id": "1602.08952"
},
{
"id": "1602.02410"
}
] |
1611.01144 | 0 | 7 1 0 2
g u A 5 ] L M . t a t s [ 5 v 4 4 1 1 0 . 1 1 6 1 : v i X r a
Published as a conference paper at ICLR 2017
# CATEGORICAL REPARAMETERIZATION WITH GUMBEL-SOFTMAX
Eric Jang Google Brain [email protected]
Shixiang Guâ University of Cambridge MPI T¨ubingen [email protected]
Ben Pooleâ Stanford University [email protected]
# ABSTRACT
Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efï¬cient gradient estimator that replaces the non-differentiable sample from a cat- egorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax esti- mator outperforms state-of-the-art gradient estimators on structured output predic- tion and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classiï¬cation. | 1611.01144#0 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 0 | 8 1 0 2
r a M 3 1 ] G L . s c [
8 v 1 1 2 1 0 . 1 1 6 1 : v i X r a
# Combating Reinforcement Learningâs Sisyphean Curse with Intrinsic Fear
Zachary C. Lipton1,2,3, Kamyar Azizzadenesheli4, Abhishek Kumar3, Lihong Li5, Jianfeng Gao6, Li Deng7
Carnegie Mellon University1, Amazon AI2, University of California, San Diego3, Univerisity of California, Irvine4, Google5, Microsoft Research6, Citadel7 [email protected], [email protected], [email protected] { lihongli, jfgao, deng } @microsoft.com
# March 1, 2022
# Abstract | 1611.01211#0 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 0 | 7 1 0 2
l u J 0 1 ] G L . s c [
2 v 4 2 2 1 0 . 1 1 6 1 : v i X r a
Published as a conference paper at ICLR 2017
# SAMPLE EFFICIENT ACTOR-CRITIC WITH EXPERIENCE REPLAY
Ziyu Wang DeepMind [email protected]
Victor Bapst DeepMind [email protected]
Nicolas Heess DeepMind [email protected]
Volodymyr Mnih DeepMind [email protected]
Remi Munos DeepMind [email protected]
Koray Kavukcuoglu DeepMind [email protected]
Nando de Freitas DeepMind, CIFAR, Oxford University [email protected]
# ABSTRACT
This paper presents an actor-critic deep reinforcement learning agent with ex- perience replay that is stable, sample efï¬cient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several inno- vations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.
# INTRODUCTION | 1611.01224#0 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 1 | # INTRODUCTION
Stochastic neural networks with discrete random variables are a powerful technique for representing distributions encountered in unsupervised learning, language modeling, attention mechanisms, and reinforcement learning domains. For example, discrete variables have been used to learn probabilis- tic latent representations that correspond to distinct semantic classes (Kingma et al., 2014), image regions (Xu et al., 2015), and memory locations (Graves et al., 2014; Graves et al., 2016). Discrete representations are often more interpretable (Chen et al., 2016) and more computationally efï¬cient (Rae et al., 2016) than their continuous analogues. | 1611.01144#1 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 1 | # March 1, 2022
# Abstract
Many practical environments contain catastrophic states that an optimal agent would visit infrequently or never. Even on toy problems, Deep Reinforcement Learning (DRL) agents tend to periodically revisit these states upon forgetting their existence under a new policy. We introduce intrinsic fear (IF), a learned reward shaping that guards DRL agents against periodic catastrophes. IF agents possess a fear model trained to predict the probability of imminent catastrophe. This score is then used to penalize the Q- learning objective. Our theoretical analysis bounds the reduction in average return due to learning on the perturbed objective. We also prove robustness to classification errors. As a bonus, IF models tend to learn faster, owing to reward shaping. Experiments demonstrate that intrinsic-fear DQNs solve otherwise pathological environments and improve on several Atari games.
# Introduction | 1611.01211#1 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 1 | # INTRODUCTION
Realistic simulated environments, where agents can be trained to learn a large repertoire of cognitive skills, are at the core of recent breakthroughs in AI (Bellemare et al., 2013; Mnih et al., 2015; Schulman et al., 2015a; Narasimhan et al., 2015; Mnih et al., 2016; Brockman et al., 2016; Oh et al., 2016). With richer realistic environments, the capabilities of our agents have increased and improved. Unfortunately, these advances have been accompanied by a substantial increase in the cost of simulation. In particular, every time an agent acts upon the environment, an expensive simulation step is conducted. Thus to reduce the cost of simulation, we need to reduce the number of simulation steps (i.e. samples of the environment). This need for sample efï¬ciency is even more compelling when agents are deployed in the real world. | 1611.01224#1 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 2 | However, stochastic networks with discrete variables are difï¬cult to train because the backprop- agation algorithm â while permitting efï¬cient computation of parameter gradients â cannot be applied to non-differentiable layers. Prior work on stochastic gradient estimation has traditionally focused on either score function estimators augmented with Monte Carlo variance reduction tech- niques (Paisley et al., 2012; Mnih & Gregor, 2014; Gu et al., 2016; Gregor et al., 2013), or biased path derivative estimators for Bernoulli variables (Bengio et al., 2013). However, no existing gra- dient estimator has been formulated speciï¬cally for categorical variables. The contributions of this work are threefold:
1. We introduce Gumbel-Softmax, a continuous distribution on the simplex that can approx- imate categorical samples, and whose parameter gradients can be easily computed via the reparameterization trick.
2. We show experimentally that Gumbel-Softmax outperforms all single-sample gradient es- timators on both Bernoulli variables and categorical variables.
3. We show that this estimator can be used to efï¬ciently train semi-supervised models (e.g. Kingma et al. (2014)) without costly marginalization over unobserved categorical latent variables. | 1611.01144#2 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 2 | # Introduction
Following the success of deep reinforcement learning (DRL) on Atari games [22] and the board game of Go [29], researchers are increasingly exploring practical applications. Some investigated applications include robotics [17], dialogue systems [9, 19], energy management [25], and self-driving cars [27]. Amid this push to apply DRL, we might ask, can we trust these agents in the wild? Agents acting society may cause harm. A self-driving car might hit pedestrians and a domestic robot might injure a child. Agents might also cause self-injury, and while Atari lives lost are inconsequential, robots are expensive.
Unfortunately, it may not be feasible to prevent all catastrophes without requiring extensive prior knowledge [10]. Moreover, for typical DQNs, providing large negative rewards does not solve the problem: as soon as the catastrophic trajectories are flushed from the replay buffer, the updated Q-function ceases to discourage revisiting these states.
In this paper, we define avoidable catastrophes as states that prior knowledge dictates an optimal policy should visit rarely or never. Additionally, we define danger statesâthose from which a catastrophic state can
1 | 1611.01211#2 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 2 | Experience replay (Lin, 1992) has gained popularity in deep Q-learning (Mnih et al., 2015; Schaul et al., 2016; Wang et al., 2016; Narasimhan et al., 2015), where it is often motivated as a technique for reducing sample correlation. Replay is actually a valuable tool for improving sample efï¬ciency and, as we will see in our experiments, state-of-the-art deep Q-learning methods (Schaul et al., 2016; Wang et al., 2016) have been up to this point the most sample efï¬cient techniques on Atari by a signiï¬cant margin. However, we need to do better than deep Q-learning, because it has two important limitations. First, the deterministic nature of the optimal policy limits its use in adversarial domains. Second, ï¬nding the greedy action with respect to the Q function is costly for large action spaces. | 1611.01224#2 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 3 | The practical outcome of this paper is a simple, differentiable approximate sampling mechanism for categorical variables that can be integrated into neural networks and trained using standard back- propagation.
âWork done during an internship at Google Brain.
1
Published as a conference paper at ICLR 2017
2 THE GUMBEL-SOFTMAX DISTRIBUTION
We begin by deï¬ning the Gumbel-Softmax distribution, a continuous distribution over the simplex that can approximate samples from a categorical distribution. Let z be a categorical variable with class probabilities Ï1, Ï2, ...Ïk. For the remainder of this paper we assume categorical samples are encoded as k-dimensional one-hot vectors lying on the corners of the (k â 1)-dimensional simplex, âkâ1. This allows us to deï¬ne quantities such as the element-wise mean Ep[z] = [Ï1, ..., Ïk] of these vectors.
The Gumbel-Max trick (Gumbel, 1954; Maddison et al., 2014) provides a simple and efï¬cient way to draw samples z from a categorical distribution with class probabilities Ï:
z = one_hot arg max i [gi + log Ïi] (1) | 1611.01144#3 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 3 | 1
be reached in a small number of steps, and assume that the optimal policy does visit the danger states rarely or never. The notion of a danger state might seem odd absent any assumptions about the transition function. With a fully-connected transition matrix, all states are danger states. However, physical environments are not fully connected. A car cannot be parked this second, underwater one second later.
This work primarily addresses how we might prevent DRL agents from perpetually making the same mistakes. As a bonus, we show that the prior knowledge knowledge that catastrophic states should be avoided accelerates learning. Our experiments show that even on simple toy problems, the classic deep Q-network (DQN) algorithm fails badly, repeatedly visiting catastrophic states so long as they continue to learn. This poses a formidable obstacle to using DQNs in the real world. How can we trust a DRL-based agent that was doomed to periodically experience catastrophes, just to remember that they exist? Imagine a self-driving car that had to periodically hit a few pedestrians to remember that it is undesirable. | 1611.01211#3 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 3 | Policy gradient methods have been at the heart of signiï¬cant advances in AI and robotics (Silver et al., 2014; Lillicrap et al., 2015; Silver et al., 2016; Levine et al., 2015; Mnih et al., 2016; Schulman et al., 2015a; Heess et al., 2015). Many of these methods are restricted to continuous domains or to very speciï¬c tasks such as playing Go. The existing variants applicable to both continuous and discrete domains, such as the on-policy asynchronous advantage actor critic (A3C) of Mnih et al. (2016), are sample inefï¬cient.
The design of stable, sample efï¬cient actor critic methods that apply to both continuous and discrete action spaces has been a long-standing hurdle of reinforcement learning (RL). We believe this paper
1
Published as a conference paper at ICLR 2017
is the ï¬rst to address this challenge successfully at scale. More speciï¬cally, we introduce an actor critic with experience replay (ACER) that nearly matches the state-of-the-art performance of deep Q-networks with prioritized replay on Atari, and substantially outperforms A3C in terms of sample efï¬ciency on both Atari and continuous control domains. | 1611.01224#3 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 4 | z = one_hot arg max i [gi + log Ïi] (1)
where g1...gk are i.i.d samples drawn from Gumbel(0, 1)1. We use the softmax function as a continu- ous, differentiable approximation to arg max, and generate k-dimensional sample vectors y â âkâ1 where
exp((log(m) + 9:)/7) Yi E fori = 1,... (2) Yj-1 exp((log(7j) + 9;)/T)
The density of the Gumbel-Softmax distribution (derived in Appendix B) is:
k ok y Prr(Yis--s Ya) = E(k) (> nist) Tl) 3) i=l i=l
This distribution was independently discovered by Maddison et al. (2016), where it is referred to as the concrete distribution. As the softmax temperature Ï approaches 0, samples from the Gumbel- Softmax distribution become one-hot and the Gumbel-Softmax distribution becomes identical to the categorical distribution p(z).
a) 5 Categorical 7T=1.0 = 10.0 i a a la a __. b) i | | L | L L â_ category | 1611.01144#4 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 4 | In the tabular setting, an RL agent never forgets the learned dynamics of its environment, even as its policy evolves. Moreover, when the Markovian assumption holds, convergence to a globally optimal policy is guaranteed. However, the tabular approach becomes infeasible in high-dimensional, continuous state spaces. The trouble for DQNs owes to the use of function approximation [24]. When training a DQN, we successively update a neural network based on experiences. These experiences might be sampled in an online fashion, from a trailing window (experience replay buffer), or uniformly from all past experiences. Regardless of which mode we use to train the network, eventually, states that a learned policy never encounters will come to form an infinitesimally small region of the training distribution. At such times, our networks suffer the well-known problem of catastrophic forgetting [21, 20]. Nothing prevents the DQNâs policy from drifting back towards one that revisits forgotten catastrophic mistakes.
We illustrate the brittleness of modern DRL algorithms with a simple pathological problem called Adventure Seeker. This problem consists of a one-dimensional continuous state, two actions, simple dynamics, and admits an analytic solution. Nevertheless, the DQN fails. We then show that similar dynamics exist in the classic RL environment Cart-Pole. | 1611.01211#4 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 4 | ACER capitalizes on recent advances in deep neural networks, variance reduction techniques, the off-policy Retrace algorithm (Munos et al., 2016) and parallel training of RL agents (Mnih et al., 2016). Yet, crucially, its success hinges on innovations advanced in this paper: truncated importance sampling with bias correction, stochastic dueling network architectures, and efï¬cient trust region policy optimization.
On the theoretical front, the paper proves that the Retrace operator can be rewritten from our proposed truncated importance sampling with bias correction technique.
# 2 BACKGROUND AND PROBLEM SETUP
Consider an agent interacting with its environment over discrete time steps. At time step t, the agent Rnx, chooses an action at according to a policy observes the nx-dimensional state vector xt â X â R produced by the environment. We will consider discrete xt) and observes a reward signal rt â Ï(a | Rna in Section 5. actions at â { 1, 2, . . . , Na} iâ¥0 γirt+i in expectation. The The goal of the agent is to maximize the discounted return Rt = discount factor γ [0, 1) trades-off the importance of immediate and future rewards. For an agent following policy Ï, we use the standard deï¬nitions of the state-action and state only value functions: | 1611.01224#4 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 5 | a) 5 Categorical 7T=1.0 = 10.0 i a a la a __. b) i | | L | L L â_ category
Figure 1: The Gumbel-Softmax distribution interpolates between discrete one-hot-encoded categor- ical distributions and continuous categorical densities. (a) For low temperatures (Ï = 0.1, Ï = 0.5), the expected value of a Gumbel-Softmax random variable approaches the expected value of a cate- gorical random variable with the same logits. As the temperature increases (Ï = 1.0, Ï = 10.0), the expected value converges to a uniform distribution over the categories. (b) Samples from Gumbel- Softmax distributions are identical to samples from a categorical distribution as Ï â 0. At higher temperatures, Gumbel-Softmax samples are no longer one-hot, and become uniform as Ï â â.
2.1 GUMBEL-SOFTMAX ESTIMATOR
The Gumbel-Softmax distribution is smooth for Ï > 0, and therefore has a well-deï¬ned gradi- ent ây/âÏ with respect to the parameters Ï. Thus, by replacing categorical samples with Gumbel- Softmax samples we can use backpropagation to compute gradients (see Section 3.1). We denote | 1611.01144#5 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 5 | To combat these problems, we propose the intrinsic fear (IF) algorithm. In this approach, we train a supervised fear model that predicts which states are likely to lead to a catastrophe within kr steps. The output of the fear model (a probability), scaled by a fear factor penalizes the Q-learning target. Crucially, the fear model maintains buffers of both safe and danger states. This model never forgets danger states, which is possible due to the infrequency of catastrophes.
We validate the approach both empirically and theoretically. Our experiments address Adventure Seeker, Cartpole, and several Atari games. In these environments, we label every lost life as a catastrophe. On the toy environments, IF agents learns to avoid catastrophe indefinitely. In Seaquest experiments, the IF agent achieves higher reward and in Asteroids, the IF agent achieves both higher reward and fewer catastrophes. The improvement on Freeway is most dramatic.
We also make the following theoretical contributions: First, we prove that when the reward is bounded and the optimal policy rarely visits the danger states, an optimal policy learned on the perturbed reward function has approximately the same return as the optimal policy learned on the original value function. Second, we prove that our method is robust to noise in the danger model.
2
# Intrinsic fear | 1611.01211#5 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 5 | # in Sections 3 and 4, and continuous actions at â A â
QÏ(xt, at) = Ext+1:â,at+1:â [ Rt|
V Ï(xt) = Eat [QÏ(xt, at) |
xt, at] and xt] .
Here, the expectations are with respect to the observed environment states xt and the actions generated by the policy Ï, where xt+1:â denotes a state trajectory starting at time t + 1. We also need to deï¬ne the advantage function AÏ(xt, at) = QÏ(xt, at) relative measure of value of each action since Eat [AÏ(xt, at)] = 0. xt) can be updated using the discounted approxi- The parameters θ of the differentiable policy Ïθ(at| mation to the policy gradient (Sutton et al., 2000), which borrowing notation from Schulman et al. (2015b), is deï¬ned as:
AÏ(xt, at) âθ log Ïθ(at| xt) . (1)
9 = Exp.2 00.0 | t>0 ofjSchulman et | 1611.01224#5 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 6 | 1The Gumbel(0, 1) distribution can be sampled using inverse transform sampling by drawing u â¼ Uniform(0, 1) and computing g = â log(â log(u)).
2
Published as a conference paper at ICLR 2017
this procedure of replacing non-differentiable categorical samples with a differentiable approxima- tion during training as the Gumbel-Softmax estimator.
While Gumbel-Softmax samples are differentiable, they are not identical to samples from the corre- sponding categorical distribution for non-zero temperature. For learning, there is a tradeoff between small temperatures, where samples are close to one-hot but the variance of the gradients is large, and large temperatures, where samples are smooth but the variance of the gradients is small (Figure 1). In practice, we start at a high temperature and anneal to a small but non-zero temperature. | 1611.01144#6 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 6 | 2
# Intrinsic fear
An agent interacts with its environment via a Markov decision process, or MDP, (S, A,7,R, y). At each step t, the agent observes a state s ⬠S and then chooses an action a ⬠A according to its policy 7. The environment then transitions to state s;,, ⬠S according to transition dynamics 7 (s;+1|s;, ay) and generates a reward r; with expectation R(s, a). This cycle continues until each episode terminates. An agent seeks to maximize the cumulative discounted return _, y'r;. Temporal-differences methods [31] like Q-learning [33] model the Q-function, which gives the optimal discounted total reward of a state-action pair. Problems of practical interest tend to have large state spaces, thus the Q-function is typically approximated by parametric models such as neural networks.
In Q-learning with function approximation, an agent collects experiences by acting greedily with respect to Q(s, a; θQ ) and updates its parameters θQ . Updates proceed as follows. For a given experience (st , at , rt , st +1), we minimize the squared Bellman error:
L = (Q(st , at ; θQ ) â yt )2 (1) | 1611.01211#6 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 6 | 9 = Exp.2 00.0 | t>0 ofjSchulman et
Following Proposition 1 of Schulman et al. (2015b), we can replace AÏ(xt, at) in the above expression with the state-action value QÏ(xt, at), the discounted return Rt, or the temporal difference residual V Ï(xt), without introducing bias. These choices will however have different rt + γV Ï(xt+1) variance. Moreover, in practice we will approximate these quantities with neural networks thus introducing additional approximation errors and biases. Typically, the policy gradient estimator using Rt will have higher variance and lower bias whereas the estimators using function approximation will have higher bias and lower variance. Combining Rt with the current value function approximation to minimize bias while maintaining bounded variance is one of the central design principles behind ACER.
To trade-off bias and variance, the asynchronous advantage actor critic (A3C) of Mnih et al. (2016) uses a single trajectory sample to obtain the following gradient approximation:
k-1 ge = > ((: vn + VG (14K) - vite) Vo log ma(a: |). (2) t>0 i=0
t>0
i=0
A3C combines both k-step returns and function approximation to trade-off variance and bias. We may think of V Ï Î¸v | 1611.01224#6 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 7 | In our experiments, we ï¬nd that the softmax temperature Ï can be annealed according to a variety of schedules and still perform well. If Ï is a learned parameter (rather than annealed via a ï¬xed schedule), this scheme can be interpreted as entropy regularization (Szegedy et al., 2015; Pereyra et al., 2016), where the Gumbel-Softmax distribution can adaptively adjust the âconï¬denceâ of proposed samples during the training process.
2.2 STRAIGHT-THROUGH GUMBEL-SOFTMAX ESTIMATOR
Continuous relaxations of one-hot vectors are suitable for problems such as learning hidden repre- sentations and sequence modeling. For scenarios in which we are constrained to sampling discrete values (e.g. from a discrete action space for reinforcement learning, or quantized compression), we discretize y using arg max but use our continuous approximation in the backward pass by approxi- mating âθz â âθy. We call this the Straight-Through (ST) Gumbel Estimator, as it is reminiscent of the biased path derivative estimator described in Bengio et al. (2013). ST Gumbel-Softmax allows samples to be sparse even when the temperature Ï is high.
# 3 RELATED WORK | 1611.01144#7 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 7 | L = (Q(st , at ; θQ ) â yt )2 (1)
for ys = ry + y - maxgâ Q(S;41, 4â; OQ). Traditionally, the parameterised Q(s, a; @) is trained by stochastic approximation, estimating the loss on each experience as it is encountered, yielding the update:
θt +1 âθt + α(yt â Q(st , at ; θt ))âQ(st , at ; θt ) . (2)
Q-learning methods also require an exploration strategy for action selection. For simplicity, we consider only the ϵ-greedy heuristic. A few tricks help to stabilize Q-learning with function approximation. Notably, with experience replay [18], the RL agent maintains a buffer of experiences, of experience to update the Q-function. | 1611.01211#7 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 7 | t>0
i=0
A3C combines both k-step returns and function approximation to trade-off variance and bias. We may think of V Ï Î¸v
In the following section, we will introduce the discrete-action version of ACER. ACER may be understood as the off-policy counterpart of the A3C method of Mnih et al. (2016). As such, ACER builds on all the engineering innovations of A3C, including efï¬cient parallel CPU computation.
2
Published as a conference paper at ICLR 2017
xt) and the value function ACER uses a single deep neural network to estimate the policy Ïθ(at| V Ï (xt). (For clarity and generality, we are using two different symbols to denote the parameters of θv the policy and value function, θ and θv, but most of these parameters are shared in the single neural network.) Our neural networks, though building on the networks used in A3C, will introduce several modiï¬cations and new modules.
# 3 DISCRETE ACTOR CRITIC WITH EXPERIENCE REPLAY | 1611.01224#7 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 8 | # 3 RELATED WORK
In this section we review existing stochastic gradient estimation techniques for discrete variables (illustrated in Figure 2). Consider a stochastic computation graph (Schulman et al., 2015) with discrete random variable z whose distribution depends on parameter θ, and cost function f (z). The objective is to minimize the expected cost L(θ) = Ezâ¼pθ(z)[f (z)] via gradient descent, which requires us to estimate âθEzâ¼pθ(z)[f (z)].
3.1 PATH DERIVATIVE GRADIENT ESTIMATORS
For distributions that are reparameterizable, we can compute the sample z as a deterministic function g of the parameters 6 and an independent random variable ¢, so that z = g(0,¢). The path-wise gradients from f to @ can then be computed without encountering any stochastic nodes:
0 (a) Of Og âE.~ z))| = âE,. 0,â¬))] =Eenp, | 4 SpEsr LF) = Fy C2] = Bony, [SE ) | 1611.01144#8 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 8 | We propose a new formulation: Suppose there exists a subset C C S of known catastrophe states/ And assume that for a given environment, the optimal policy rarely enters from which catastrophe states are reachable in a short number of steps. We define the distance d(s;, s;) to be length N of the smallest sequence of transitions {(s;, a;,1;, Sra}, that traverses state space from s; to s;.' Definition 2.1. Suppose a priori knowledge that acting according to the optimal policy z*, an agent rarely encounters states s ⬠S that lie within distance d(s,c) < k; for any catastrophe state c ⬠C. Then each state s for which Hc ⬠C s.t. d(s,c) < k; is a danger state. In Algorithm 1, the agent maintains both a DQN and a separate, supervised fear model F : S + [0,1]. F provides an auxiliary source of reward, penalizing the Q-learner for entering likely danger states. In our case, we use a neural network of the same architecture as the DON (but for the output layer). While one could sharing weights between the two networks, such tricks are not relevant to this paperâs contribution. | 1611.01211#8 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 8 | # 3 DISCRETE ACTOR CRITIC WITH EXPERIENCE REPLAY
Off-policy learning with experience replay may appear to be an obvious strategy for improving the sample efï¬ciency of actor-critics. However, controlling the variance and stability of off-policy estimators is notoriously hard. Importance sampling is one of the most popular approaches for off- policy learning (Meuleau et al., 2000; Jie & Abbeel, 2010; Levine & Koltun, 2013). In our context, it , proceeds as follows. Suppose we retrieve a trajectory xk) } where the actions have been sampled according to the behavior policy µ, from our memory of experiences. Then, the importance weighted policy gradient is given by:
k k /k gm = (1 a) > (> a) Vo log mo(at|x2), (3) 1=0 t=0 \i=0
t=0
1=0
\i=0 | 1611.01224#8 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 9 | For example, the normal distribution z â¼ N (µ, Ï) can be re-written as µ + Ï Â· N (0, 1), making it trivial to compute âz/âµ and âz/âÏ. This reparameterization trick is commonly applied to train- ing variational autooencoders with continuous latent variables using backpropagation (Kingma & Welling, 2013; Rezende et al., 2014b). As shown in Figure 2, we exploit such a trick in the con- struction of the Gumbel-Softmax estimator. | 1611.01144#9 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 9 | We train the fear model to predict the probability that any state will lead to catastrophe within k moves. Over the course of training, our agent adds each experience (s, a, r,sâ) to its experience replay buffer. Whenever a catastrophe is reached at, say, the n;p turn of an episode, we add the preceding k, (fear radius) states to a danger buffer. We add the first n â k, states of that episode to a safe buffer. When n < k,, all states for that episode are added to the list of danger states. Then after each turn, in addition to updating the Q-network, we update the fear model, sampling 50% of states from the danger buffer, assigning them label 1, and the remaining 50% from the safe buffer, assigning them label 0.
1In the stochastic dynamics setting, the distance is the minimum mean passing time between the states.
3
Algorithm 1 Training DQN with Intrinsic Fear | 1611.01211#9 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 9 | t=0
1=0
\i=0
where Ït = Ï(at|xt) µ(at|xt) denotes the importance weight. This estimator is unbiased, but it suffers from very high variance as it involves a product of many potentially unbounded importance weights. To prevent the product of importance weights from exploding, Wawrzy´nski (2009) truncates this product. Truncated importance sampling over entire trajectories, although bounded in variance, could suffer from signiï¬cant bias.
Recently, Degris et al. (2012) attacked this problem by using marginal value functions over the limiting distribution of the process to yield the following approximation of the gradient:
# gmarg = Extâ¼Î²,atâ¼Âµ [Ïtâθ log Ïθ(at|
xt)QÏ(xt, at)] , (4) | 1611.01224#9 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 10 | Biased path derivative estimators can be utilized even when z is not reparameterizable. In general, we can approximate âθz â âθm(θ), where m is a differentiable proxy for the stochastic sample. For Bernoulli variables with mean parameter θ, the Straight-Through (ST) estimator (Bengio et al., 2013) approximates m = µθ(z), implying âθm = 1. For k = 2 (Bernoulli), ST Gumbel-Softmax is similar to the slope-annealed Straight-Through estimator proposed by Chung et al. (2016), but uses a softmax instead of a hard sigmoid to determine the slope. Rolfe (2016) considers an al- ternative approach where each binary latent variable parameterizes a continuous mixture model. Reparameterization gradients are obtained by backpropagating through the continuous variables and marginalizing out the binary variables.
One limitation of the ST estimator is that backpropagating with respect to the sample-independent mean may cause discrepancies between the forward and backward pass, leading to higher variance.
3
Published as a conference paper at ICLR 2017
6) os" aly < 6: detent, differentiable node Siochastie node Forward pass alogPy(Â¥) a0 Backpropagation
# a)
<>
# CE ' J | 1611.01144#10 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 10 | 1In the stochastic dynamics setting, the distance is the minimum mean passing time between the states.
3
Algorithm 1 Training DQN with Intrinsic Fear
1: Input: Q (DQN), F (fear model), fear factor λ, fear phase-in length kλ, fear radius kr 2: Output: Learned parameters θQ and θF 3: Initialize parameters θQ and θF randomly 4: Initialize replay buffer D, danger state buffer DD , and safe state buffer DS 5: Start per-episode turn counter ne 6: for t in 1:T do 7:
With probability ϵ select random action at Otherwise, select a greedy action at = arg maxa Q(st , a; θQ ) Execute action at in environment, observing reward rt and successor state st +1 Store transition (st , at , rt , st +1) in D if st +1 is a catastrophe state then
8: 9: 10: 11: 12:
Add states st âkr through st to DD
else
13: 14:
Add states st âne through st âkr â1 to DS
14: Add states s;_y, through s;-,,-1 to Ds
Sample a random mini-batch of transitions (sÏ , aÏ , rÏ , sÏ +1) from D Î»Ï â min(λ, λ ·t kλ | 1611.01211#10 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 10 | xt)QÏ(xt, at)] , (4)
where Extâ¼Î²,atâ¼Âµ[ to the limiting distribution β(x) = ] · x0, µ) with behavior policy µ. To keep the notation succinct, we will replace limtââ P (xt = x | Extâ¼Î²,atâ¼Âµ[ ] with Extat[ · Two important facts about equation (4) must be highlighted. First, note that it depends on QÏ and not on Qµ, consequently we must be able to estimate QÏ. Second, we no longer have a product of importance weights, but instead only need to estimate the marginal importance weight Ït. Importance sampling in this lower dimensional space (over marginals as opposed to trajectories) is expected to exhibit lower variance.
] and ensure we remind readers of this when necessary. ·
Degris et al. (2012) estimate QÏ in equation (4) using lambda returns: Rλ λ)γV (xt+1) + λγÏt+1Rλ t+1. This estimator requires that we know how to choose λ ahead of time to trade off bias and variance. Moreover, when using small values of λ to reduce variance, occasional large importance weights can still cause instability. | 1611.01224#10 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 11 | # a)
<>
# CE ' J
Figure 2: Gradient estimation in stochastic computation graphs. (1) âθf (x) can be computed via backpropagation if x(θ) is deterministic and differentiable. (2) The presence of stochastic node z precludes backpropagation as the sampler function does not have a well-deï¬ned gradient. (3) The score function estimator and its variants (NVIL, DARN, MuProp, VIMCO) obtain an unbiased estimate of âθf (x) by backpropagating along a surrogate loss Ëf log pθ(z), where Ëf = f (x) â b and b is a baseline for variance reduction. (4) The Straight-Through estimator, developed primarily for Bernoulli variables, approximates âθz â 1. (5) Gumbel-Softmax is a path derivative estimator for a continuous distribution y that approximates z. Reparameterization allows gradients to ï¬ow from f (y) to θ. y can be annealed to one-hot categorical variables over the course of training.
Gumbel-Softmax avoids this problem because each sample y is a differentiable proxy of the corre- sponding discrete sample z. | 1611.01144#11 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 11 | Sample a random mini-batch of transitions (sÏ , aÏ , rÏ , sÏ +1) from D Î»Ï â min(λ, λ ·t kλ
15: 16:
# ) for terminal sÏ +1 : rÏ â λÏ
16: A; â min(A, e)
for terminal s,4; : Tr â Ar 17: yr â 4 for non-terminal s,+; : rp + maxg Q(s;41,.4â; Og)â A+ F(sc+13 OF) 18: 09 â 09-9 Vog(Yr â Ose, 475 OQ)? 19: Sample random mini-batch s; with 50% of examples from Dp and 50% from Ds 1, fors;⬠Dp yi 0, fors; ⬠Ds 21: Or â Or â 1 - Vo, lossr(y;, F(s;; OF) 20:
For each update to the DON, we perturb the TD target y;. Instead of updating Q(s;, a;;69) towards r; + maxq QO(s;41, 4â; 99), we modify the target by subtracting the intrinsic fear:
yy are max Q(Sri, aâ; 09) â A+ F(se413 OF) (3) | 1611.01211#11 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 11 | In the following subsection, we adopt the Retrace algorithm of Munos et al. (2016) to estimate QÏ. Subsequently, we propose an importance weight truncation technique to improve the stability of the off-policy actor critic of Degris et al. (2012), and introduce a computationally efï¬cient trust region scheme for policy optimization. The formulation of ACER for continuous action spaces will require further innovations that are advanced in Section 5.
3.1 MULTI-STEP ESTIMATION OF THE STATE-ACTION VALUE FUNCTION
In this paper, we estimate QÏ(xt, at) using Retrace (Munos et al., 2016). (We also experimented with the related tree backup method of Precup et al. (2000) but found Retrace to perform better in practice.) Given a trajectory generated under the behavior policy µ, the Retrace estimator can be expressed recursively as follows1:
Qret(xt, at) = rt + γ ¯Ït+1[Qret(xt+1, at+1) Q(xt+1, at+1)] + γV (xt+1), (5)
â 1For ease of presentation, we consider only λ = 1 for Retrace.
3
Published as a conference paper at ICLR 2017 | 1611.01224#11 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 12 | Gumbel-Softmax avoids this problem because each sample y is a differentiable proxy of the corre- sponding discrete sample z.
3.2 SCORE FUNCTION-BASED GRADIENT ESTIMATORS
The score function estimator (SF, also referred to as REINFORCE (Williams, 1992) and likelihood ratio estimator (Glynn, 1990)) uses the identity âθpθ(z) = pθ(z)âθ log pθ(z) to derive the follow- ing unbiased estimator:
âθEz [f (z)] = Ez [f (z)âθ log pθ(z)] (5)
SF only requires that pθ(z) is continuous in θ, and does not require backpropagating through f or the sample z. However, SF suffers from high variance and is consequently slow to converge. In particular, the variance of SF scales linearly with the number of dimensions of the sample vector (Rezende et al., 2014a), making it especially challenging to use for categorical distributions.
The variance of a score function estimator can be reduced by subtracting a control variate b(z) from the learning signal f , and adding back its analytical expectation µb = Ez [b(z)âθ log pθ(z)] to keep the estimator unbiased: | 1611.01144#12 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 12 | yy are max Q(Sri, aâ; 09) â A+ F(se413 OF) (3)
where F (s; θF ) is the fear model and λ is a fear factor determining the scale of the impact of intrinsic fear on the Q-function update.
# 3 Analysis
Note that IF perturbs the objective function. Thus, one might be concerned that the perturbed reward might lead to a sub-optimal policy. Fortunately, as we will show formally, if the labeled catastrophe states and danger zone do not violate our assumptions, and if the fear model reaches arbitrarily high accuracy, then this will not happen. For an MDP, M = (S,A,7,R,y), with 0 < y < 1, the average reward return is as follows:
4
limyâoo #2m| yr rela if y=1 qo (1) = a= yEm| EP y'nla| if 0<y<1 | 1611.01211#12 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 12 | â 1For ease of presentation, we consider only λ = 1 for Retrace.
3
Published as a conference paper at ICLR 2017
where ¯Ït is the truncated importance weight, ¯Ït = min µ(at|xt) , Q is the current value estimate of QÏ, and V (x) = Eaâ¼ÏQ(x, a). Retrace is an off-policy, return-based algorithm which has low variance and is proven to converge (in the tabular case) to the value function of the target policy for any behavior policy, see Munos et al. (2016). | 1611.01224#12 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 13 | âθEz [f (z)] = Ez [f (z)âθ log pθ(z) + (b(z)âθ log pθ(z) â b(z)âθ log pθ(z))] = Ez [(f (z) â b(z))âθ log pθ(z)] + µb (6) (7)
We brieï¬y summarize recent stochastic gradient estimators that utilize control variates. We direct the reader to Gu et al. (2016) for further detail on these techniques.
⢠NVIL (Mnih & Gregor, 2014) uses two baselines: (1) a moving average ¯f of f to center the learning signal, and (2) an input-dependent baseline computed by a 1-layer neural network
4
Published as a conference paper at ICLR 2017
ï¬tted to f â ¯f (a control variate for the centered learning signal itself). Finally, variance normalization divides the learning signal by max(1, Ïf ), where Ï2 f is a moving average of Var[f ]. | 1611.01144#13 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 13 | 4
limyâoo #2m| yr rela if y=1 qo (1) = a= yEm| EP y'nla| if 0<y<1
The optimal policy Ï â of the model M is the policy which maximizes the average reward return, Ï â = maxÏ â P η(Ï ) where P is a set of stationary polices. Theorem 1. For a given MDP, M, with γ â [0, 1] and a catastrophe detector f , let Ï â denote any optimal policy of M, and ËÏ denote an optimal policy of M equipped with fear model F , and λ, environment (M, F ). If the probability Ï â visits the states in the danger zone is at most ϵ, and 0 ⤠R(s, a) ⤠1, then
ηM (Ï â) ⥠ηM ( ËÏ ) ⥠ηM, F ( ËÏ ) ⥠ηM (Ï â) â λϵ . (4)
In other words, ËÏ is λϵ-optimal in the original MDP. | 1611.01211#13 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 13 | The recursive Retrace equation depends on the estimate Q. To compute it, in discrete action spaces, we adopt a convolutional neural network with âtwo headsâ that outputs the estimate Qθv (xt, at), as xt). This neural representation is the same as in (Mnih et al., 2016), with the well as the policy Ïθ(at| exception that we output the vector Qθv (xt, at) instead of the scalar Vθv (xt). The estimate Vθv (xt) can be easily derived by taking the expectation of Qθv under Ïθ. To approximate the policy gradient gmarg, ACER uses Qret to estimate QÏ. As Retrace uses multi- step returns, it can signiï¬cantly reduce bias in the estimation of the policy gradient 2. To learn the critic Qθv (xt, at), we again use Qret(xt, at) as a target in a mean squared error loss and update its parameters θv with the following standard gradient:
(Qret(xt, at) Qθv (xt, at)) (6)
# âθv Qθv (xt, at)).
â | 1611.01224#13 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 14 | e DARN (Gregor et al.| 2013) uses b = f(z) + fâ(Z)(z â 2), where the baseline corre- sponds to the first-order Taylor approximation of f(z) from f(z). z is chosen to be 1/2 for Bernoulli variables, which makes the estimator biased for non-quadratic f, since it ignores the correction term jy in the estimator expression.
e@ MuProp (Gu et al.||2016) also models the baseline as a first-order Taylor expansion: b = f(2) + f'(@)G = 2Z) and py = f'(Z)VoEz [z]. To overcome backpropagation through discrete sampling, a mean-field approximation fy7r(j19(z)) is used in place of f(z) to compute the baseline and derive the relevant gradients.
e VIMCO (Mnih & Rezende}|2016) is a gradient estimator for multi-sample objectives that uses the mean of other samples 6 = 1/m Vii f (z;) to construct a baseline for each sample 24 © Z1zm. We exclude VIMCO from our experiments because we are comparing estimators for single-sample objectives, although Gumbel-Softmax can be easily extended to multi- sample objectives. | 1611.01144#14 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 14 | Proof. The policy Ï â visits the fear zone with probability at most ϵ. Therefore, applying Ï â on the envi- ronment with intrinsic fear (M, F ), provides a expected return of at least ηM (Ï â) â ϵλ. Since there exists a policy with this expected return on (M, F ), therefore, the optimal policy of (M, F ), must result in an expected return of at least ηM (Ï â) â ϵλ on (M, F ), i.e. ηM, F ( ËÏ ) ⥠ηM (Ï â) â ϵλ. The expected return ηM, F ( ËÏ ) decomposes into two parts: (i) the expected return from original environment M, ηM ( ËÏ ), (ii) the expected return from the fear model. If ËÏ visits the fear zone with probability at most Ëϵ, then ηM, F ( ËÏ ) ⥠ηM ( ËÏ ) â λ Ëϵ. Therefore, applying ËÏ on M promises an expected return of at least ηM (Ï â) â ϵλ + Ëϵλ, lower bounded by ηM (Ï â) â ϵλ. | 1611.01211#14 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 14 | (Qret(xt, at) Qθv (xt, at)) (6)
# âθv Qθv (xt, at)).
â
Because Retrace is return-based, it also enables faster learning of the critic. Thus the purpose of the multi-step estimator Qret in our setting is twofold: to reduce bias in the policy gradient, and to enable faster learning of the critic, hence further reducing bias.
IMPORTANCE WEIGHT TRUNCATION WITH BIAS CORRECTION
The marginal importance weights in Equation (4) can become large, thus causing instability. To safe-guard against high variance, we propose to truncate the importance weights and introduce a correction term via the following decomposition of gmarg: gmarg = Extat [Ïtâθlog Ïθ(at| Eat[¯Ïtâθlog Ïθ(at| | 1611.01224#14 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 15 | 3.3 SEMI-SUPERVISED GENERATIVE MODELS
Semi-supervised learning considers the problem of learning from both labeled data (x, y) â¼ DL and unlabeled data x â¼ DU , where x are observations (i.e. images) and y are corresponding labels (e.g. semantic class). For semi-supervised classiï¬cation, Kingma et al. (2014) propose a variational autoencoder (VAE) whose latent state is the joint distribution over a Gaussian âstyleâ variable z and a categorical âsemantic classâ variable y (Figure 6, Appendix). The VAE objective trains a discriminative network qÏ(y|x), inference network qÏ(z|x, y), and generative network pθ(x|y, z) end-to-end by maximizing a variational lower bound on the log-likelihood of the observation under the generative model. For labeled data, the class y is observed, so inference is only done on z â¼ q(z|x, y). The variational lower bound on labeled data is given by: | 1611.01144#15 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 15 | It is worth noting that the theorem holds for any optimal policy of M. If one of them does not visit the fear zone at all (i.e., ϵ = 0), then ηM (Ï â) = ηM, F ( ËÏ ) and the fear signal can boost up the process of learning the optimal policy.
Since we empirically learn the fear model F using collected data of some finite sample size N , our RL agent has access to an imperfect fear model ËF , and therefore, computes the optimal policy based on ËF . In this case, the RL agent trains with intrinsic fear generated by ËF , learning a different value function than the RL agent with perfect F . To show the robustness against errors in ËF , we are interested in the average deviation in the value functions of the two agents.
Our second main theoretical result, given in Theorem 2, allows the RL agent to use a smaller discount factor, denoted γpl an, than the actual one (γpl an ⤠γ ), to reduce the planning horizon and computation cost. Moreover, when an estimated model of the environment is used, Jiang et al. [2015] shows that using a smaller discount factor for planning may prevent over-fitting to the estimated model. Our result demonstrates that using a smaller discount factor for planning can reduce reduction of expected return when an estimated fear model is used. | 1611.01211#15 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 15 | where ¯Ït = min Ït(a) = Ï(a|xt) expectations are with respect to the limiting state distribution under the behavior policy: xt â¼ at â¼ The clipping of the importance weight in the ï¬rst term of equation (7) ensures that the variance of the gradient estimate is bounded. The correction term (second term in equation (7)) ensures that our estimate is unbiased. Note that the correction term is only active for actions such that Ït(a) > c. In particular, if we choose a large value for c, the correction term only comes into effect when the variance of the original off-policy estimator of equation (4) is very high. When this happens, our decomposition has the nice property that the truncated weight in the ï¬rst term is at most c while the correction weight
Ït(a)âc Ït(a) in the second term is at most 1.
+
We model QÏ(xt, a) in the correction term with our neural network approximation Qθv (xt, at). This modiï¬cation results in what we call the truncation with bias correction trick, in this case applied to the function
# âθ log Ïθ(at| | 1611.01224#15 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 16 | log pθ(x, y) ⥠âL(x, y) = Ezâ¼qÏ(z|x,y) [log pθ(x|y, z)] â KL[q(z|x, y)||pθ(y)p(z)]
For unlabeled data, difï¬culties arise because the categorical distribution is not reparameterizable. Kingma et al. (2014) approach this by marginalizing out y over all classes, so that for unlabeled data, inference is still on qÏ(z|x, y) for each y. The lower bound on unlabeled data is:
log po() 2 âU(x) = Eznqg(y,z|x) [log pa(zly, z) + log po(y) + log p(z) â ga(y,2|z)] (9) = YE aolyle)(-L(w,y) + H(ag(ula))) (10) y
The full maximization objective is:
J = E(x,y)â¼DL [âL(x, y)] + Exâ¼DU [âU(x)] + α · E(x,y)â¼DL[log qÏ(y|x)] (11) | 1611.01144#16 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 16 | Ï â F1,γ1
F2,γ2
(s), s â S, denote Specifically, for a given environment, with fear model F1 and discount factor γ1, let V the state value function under the optimal policy of an environment with fear model F2 and the discount factor γ2. In the same environment, let ÏÏ (s) denote the visitation distribution over states under policy Ï . We are interested in the average reduction on expected return caused by an imperfect classifier; this
5
reduction, denoted L(F, F, Y>Yplan), is defined as
(1-y) I. ora vf â(s)-Vp sneco a
Theorem 2. Suppose Ypian < y, and é ⬠(0, 1). Let F be the fear model in Â¥ with minimum empirical risk on N samples. For a given MDP model, the average reduction on expected return, L(F,F,Y,Ypian), vanishes as N increase: with probability at least 1â 6,
L = O λ 1 â γ 1 â γpl an VC(F ) + log 1 δ N + (γ â γpl an) 1 â γpl an , (5)
where VC(F ) is the VC dimension of the hypothesis class F . | 1611.01211#16 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 16 | # âθ log Ïθ(at|
Cc ge = #, | lpiVae To (ara )Q" (we, ay) | +E (Me Volog mo(alx1)Qo, (rt, a (8) +
Equation involves an expectation over the stationary distribution of the Markov process. We can however approximate it by sampling trajectories {x9, a0, 10, H(-|Z0),++* Tk, Ak, Th MC|Ze) f
x0, a0, r0, µ( {
} 2An alternative to Retrace here is Q(λ) with off-policy corrections (Harutyunyan et al., 2016) which we
|
· ·
discuss in more detail in Appendix B.
4
|
Published as a conference paper at ICLR 2017
xt) are the policy vectors. Given these
generated from the behavior policy µ. Here the terms µ( ·| trajectories, we can compute the off-policy ACER gradient: = ¯Ïtâθ log Ïθ(at| Ït(a) â Ït(a)
# gacer t | 1611.01224#16 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 17 | where α is the scalar trade-off between the generative and discriminative objectives.
One limitation of this approach is that marginalization over all k class values becomes prohibitively expensive for models with a large number of classes. If D, I, G are the computational cost of sam- pling from qÏ(y|x), qÏ(z|x, y), and pθ(x|y, z) respectively, then training the unsupervised objective requires O(D + k(I + G)) for each forward/backward step. In contrast, Gumbel-Softmax allows us to backpropagate through y â¼ qÏ(y|x) for single sample gradient estimation, and achieves a cost of O(D + I + G) per training step. Experimental comparisons in training speed are shown in Figure 5.
# 4 EXPERIMENTAL RESULTS
In our ï¬rst set of experiments, we compare Gumbel-Softmax and ST Gumbel-Softmax to other stochastic gradient estimators: Score-Function (SF), DARN, MuProp, Straight-Through (ST), and
5
(8)
Published as a conference paper at ICLR 2017 | 1611.01144#17 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 17 | where VC(F ) is the VC dimension of the hypothesis class F .
Proof. In order to analyze [ir - Vey en 0}. which is always non-negative, we decompose it as follows:
(6) F.Yplan (veer - VER) + [v rey (=v nan)
The first term is the difference in the expected returns of Ï â from s: F,γ under two different discount factors, starting
ELD! = Yptan)Â¥els0 = sy Fl . (7) t=0
γ âγpl an (1âγpl an )(1âγ ) .
Since rt ⤠1, ât, using the geometric series, Eq. 7 is upper bounded by 1
=
# 1 1âγpl an
1âγ â | 1611.01211#17 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 17 | # gacer t
prVo log mo(arlae)[Qâ¢"(ae, ar) â Vo, (a2)] +E, (5] Vo log mo(a|21)[Qo, (1, a) â wo.ce) : (9)
In the above expression, we have subtracted the classical baseline Vθv (xt) to reduce variance. , (9) recovers (off-policy) policy gradient up to the use It is interesting to note that, when c = of Retrace. When c = 0, (9) recovers an actor critic update that depends entirely on Q estimates. In the continuous control domain, (9) also generalizes Stochastic Value Gradients if c = 0 and the reparametrization trick is used to estimate its second term (Heess et al., 2015).
3.3 EFFICIENT TRUST REGION POLICY OPTIMIZATION
The policy updates of actor-critic methods do often exhibit high variance. Hence, to ensure stability, we must limit the per-step changes to the policy. Simply using smaller learning rates is insufï¬cient as they cannot guard against the occasional large updates while maintaining a desired learning speed. Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a) provides a more adequate solution. | 1611.01224#17 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 18 | 5
(8)
Published as a conference paper at ICLR 2017
Slope-Annealed ST. Each estimator is evaluated on two tasks: (1) structured output prediction and (2) variational training of generative models. We use the MNIST dataset with ï¬xed binarization for training and evaluation, which is common practice for evaluating stochastic gradient estimators (Salakhutdinov & Murray, 2008; Larochelle & Murray, 2011).
Learning rates are chosen from {3eâ5, 1eâ5, 3eâ4, 1eâ4, 3eâ3, 1eâ3}; we select the best learn- ing rate for each estimator using the MNIST validation set, and report performance on the test set. Samples drawn from the Gumbel-Softmax distribution are continuous during training, but are discretized to one-hot vectors during evaluation. We also found that variance normalization was nec- essary to obtain competitive performance for SF, DARN, and MuProp. We used sigmoid activation functions for binary (Bernoulli) neural networks and softmax activations for categorical variables. Models were trained using stochastic gradient descent with momentum 0.9.
4.1 STRUCTURED OUTPUT PREDICTION WITH STOCHASTIC BINARY NETWORKS | 1611.01144#18 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 18 | Since rt ⤠1, ât, using the geometric series, Eq. 7 is upper bounded by 1
=
# 1 1âγpl an
1âγ â
1 Y-Yplan ce Since r; < 1, Vt, using the geometric series, Eq. 7 is upper bounded by ry is an optimal policy of an PF, . The second term is upper bounded by V, F, Pte (s) âV,. yee (s) since 7, Yol , sYÂ¥plan environment equipped with (F, Yplan)- Furthermore, as Ypian S y andr; > 0, we have Ve. vee (s) = F.Yplan F.Ypla the deviation of the value function under two different close policies. Since F and F are close, we expect that this deviation to be small. With one more decomposition step a a V,, plan (s). Therefore, the second term of Eq. 6 is upper bounded by Vay ete (s )-V, â "nn ), which is sÂ¥plan
+ | 1611.01211#18 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 18 | Schulman et al. (2015a) approximately limit the difference between the updated policy and the current policy to ensure safety. Despite the effectiveness of their TRPO method, it requires repeated computation of Fisher-vector products for each update. This can prove to be prohibitively expensive in large domains.
In this section we introduce a new trust region policy optimization method that scales well to large problems. Instead of constraining the updated policy to be close to the current policy (as in TRPO), we propose to maintain an average policy network that represents a running average of past policies and forces the updated policy to not deviate far from this average.
We decompose our policy network in two parts: a distribution f , and a deep neural network that gen- erates the statistics Ïθ(x) of this distribution. That is, given f , the policy is completely characterized by the network Ïθ: Ï( Ïθ(x)). For example, in the discrete domain, we choose f to be the categorical distribution with a probability vector Ïθ(x) as its statistics. The probability vector is of course parameterised by θ. | 1611.01224#18 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 19 | 4.1 STRUCTURED OUTPUT PREDICTION WITH STOCHASTIC BINARY NETWORKS
The objective of structured output prediction is to predict the lower half of a 28 x 28 MNIST digit given the top half of the image (14 x 28). This is acommon benchmark for training stochastic binary networks (SBN) (Raiko et al.| 2014} Gu et al.| 2016} Mnih & Rezende| 2016). The minimization objective for this conditional generative model is an importance-sampled estimate of the likelihood objective, Ej, po (is|:rper) [2 2, log po (aiower|ti)], where m = 1 is used for training and m = 1000 is used for evaluation.
We trained a SBN with two hidden layers of 200 units each. This corresponds to either 200 Bernoulli variables (denoted as 392-200-200-392) or 20 categorical variables (each with 10 classes) with bi- narized activations (denoted as 392-(20 Ã 10)-(20 Ã 10)-392). | 1611.01144#19 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 19 | +
Yplan Poi _ plan F.Yptan Vea lee OW) = [Vet (= Vp ra") F.Ypian +(v. Prptan (gy v, Pptan(s)) 4 Yplan F.Yplan Yplan F.Yplan s[efinera- rf)
.
Since the middle term in this equation is non-positive, we can ignore it for the purpose of upper-bounding the left-hand side. The upper bound is sum of the remaining two terms which is also upper bounded by 2
6
times of the maximum of them;
2 max vz (s)- VE me{ap, ty | FeYptan âYplan *Yplanâ F.Ypian (s)} ,
which is the deviation in values of different domains. The value functions satisfy the Bellman equation for any Ï :
Vin Ypta (8) =R(s, 2(s)) + AF(s) +Yptan [T(S8. 2 VE yyy 9048 VE vptan (s) Ris, x) + AF(s) © +Yplan | T(s'|s, n(s))VZ (sâ)ds ° seS >Yplan | 1611.01211#19 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 19 | We denote the average policy network as Ïθa and update its parameters θa âsoftlyâ after each update to the policy parameter θ: θa â Consider, for example, the ACER policy gradient as deï¬ned in Equation (9), but with respect to Ï: Ïθ(x))[Qret(xt, at)
â
# gacer t
= PtV o(a:) log f(aildo(x))[Qâ¢" (xe, at) _ Vo, (x1)] pila) ⢠5 ' +E | V g(a.) log f(at|b0(x)) (Qe, (4,4) â Vo,(xe)| }- 10) ann pr(a) +
Given the averaged policy network, our proposed trust region update involves two stages. In the ï¬rst stage, we solve the following optimization problem with a linearized KL divergence constraint:
Inimi 1 aacer 2 minimize <= -â2Z Ir 2 ls lla an) subject to Vee) Decx [F(-1o. (1))IIFClea(@e))]" 2 <6 | 1611.01224#19 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 20 | As shown in Figure 3, ST Gumbel-Softmax is on par with the other estimators for Bernoulli vari- ables and outperforms on categorical variables. Meanwhile, Gumbel-Softmax outperforms other estimators on both Bernoulli and Categorical variables. We found that it was not necessary to anneal the softmax temperature for this task, and used a ï¬xed Ï = 1.
(a) (b)
Figure 3: Test loss (negative log-likelihood) on the structured output prediction task with binarized MNIST using a stochastic binary network with (a) Bernoulli latent variables (392-200-200-392) and (b) categorical latent variables (392-(20 Ã 10)-(20 Ã 10)-392).
4.2 GENERATIVE MODELING WITH VARIATIONAL AUTOENCODERS | 1611.01144#20 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 20 | which can be solved using iterative updates of dynamic programing. Let V Ï i (s) respectably denote the iâth iteration of the dynamic programmings corresponding to the first and second equalities in Eq. 8. Therefore, for any state
Vi(8)-Vi"(s) = A/F(s) - AâF(s) +Yptan J T(s'5.2(9)) (Vi-us") â Hits?) ds ay YptanT")" (F~F) (3), (10)
where (J 7)â is a kernel and denotes the transition operator applied i times to itself. The classification error ~ con ~ |Fs) - Fis) is the zero-one loss of binary classifier, therefore, its expectation hes w@ "plan (s) |F«s) - Fis ds is bounded by 3200 Lo Hoes linear operator, with probability at least 1 â 6 [32, 12]. As long as the operator (J)! is a
3200 VC(F) + log 5 fer Fyptan (§) |v7(s) - Vi (s)\ds < y_ 2200 NOG) NES )+}e8 5 : (11) seS âYplan N | 1611.01211#20 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 20 | Ïθa (xt)) ·| Since the constraint is linear, the overall optimization problem reduces to a simple quadratic program- ming problem, the solution of which can be easily derived in closed form using the KKT conditions. Letting k =
Vg(0,) Dx [f(-16o, (e+) || i at = Get â
|
i KT gacet _ 5 at = Get â max {0, ao" z \ (12) Walla
This transformation of the gradient has a very natural form. If the constraint is satisfied, there is no change to the gradient with respect to ¢g(x,). Otherwise, the update is scaled down in the direction
5
Published as a conference paper at ICLR 2017
1 on-policy + 0 replay (A3C) 1 on-policy + 1 replay (ACER) 1 on-policy + 4 replay (ACER) 1 on-policy + 8 replay (ACER) DON Prioritized Replay Median (in Human) Median (in Human) ~ Million Steps | 1611.01224#20 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 21 | 4.2 GENERATIVE MODELING WITH VARIATIONAL AUTOENCODERS
We train variational autoencoders (Kingma & Welling, 2013), where the objective is to learn a gener- ative model of binary MNIST images. In our experiments, we modeled the latent variable as a single hidden layer with 200 Bernoulli variables or 20 categorical variables (20Ã10). We use a learned cat- egorical prior rather than a Gumbel-Softmax prior in the training objective. Thus, the minimization objective during training is no longer a variational bound if the samples are not discrete. In practice,
6
Published as a conference paper at ICLR 2017
we ï¬nd that optimizing this objective in combination with temperature annealing still minimizes actual variational bounds on validation and test sets. Like the structured output prediction task, we use a multi-sample bound for evaluation with m = 1000.
The temperature is annealed using the schedule Ï = max(0.5, exp(ârt)) of the global training step t, where Ï is updated every N steps. N â {500, 1000} and r â {1eâ5, 1eâ4} are hyperparameters for which we select the best-performing estimator on the validation set and report test performance. | 1611.01144#21 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 21 | Therefore, L£(F, F, Y:Yplan) is bounded by (1 â y) times of sum of Eq. 11 and ~~, with probability at least 1â 6. 7
Theorem 2 holds for both finite and continuous state-action MDPs. Over the course of our experiments, we discovered the following pattern: Intrinsic fear models are more effective when the fear radius kr is large enough that the model can experience danger states at a safe distance and correct the policy, without experiencing many catastrophes. When the fear radius is too small, the danger probability is only nonzero at states from which catastrophes are inevitable anyway and intrinsic fear seems not to help. We also found that wider fear factors train more stably when phased in over the course of many episodes. So, in all of our experiments we gradually phase in the fear factor from 0 to λ reaching full strength at predetermined time step kλ.
7
# 4 Environments
We demonstrate our algorithms on the following environments: (i) Adventure Seeker, a toy pathological environment that we designed to demonstrate catastrophic forgetting; (ii) Cartpole, a classic RL environment; and (ii) the Atari games Seaquest, Asteroids, and Freeway [3]. | 1611.01211#21 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 21 | Figure 1: ACER improvements in sample (LEFT) and computation (RIGHT) complexity on Atari. On each plot, the median of the human-normalized score across all 57 Atari games is presented for 4 ratios of replay with 0 replay corresponding to on-policy A3C. The colored solid and dashed lines represent ACER with and without trust region updating respectively. The environment steps are counted over all threads. The gray curve is the original DQN agent (Mnih et al., 2015) and the black curve is one of the Prioritized Double DQN agents from Schaul et al. (2016).
of k, thus effectively lowering rate of change between the activations of the current policy and the average policy network.
In the second stage, we take advantage of back-propagation. Speciï¬cally, the updated gradient with respect to Ïθ, that is zâ, is back-propagated through the network to compute the derivatives with respect to the parameters. The parameter updates for the policy network follow from the chain rule: âÏθ(x)
âθ zâ.
The trust region step is carried out in the space of the statistics of the distribution f , and not in the space of the policy parameters. This is done deliberately so as to avoid an additional back-propagation step through the policy network. | 1611.01224#21 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 22 | As shown in Figure 4, ST Gumbel-Softmax outperforms other estimators for Categorical variables, and Gumbel-Softmax drastically outperforms other estimators in both Bernoulli and Categorical variables.
# Bound (nats)
(a) (b)
Figure 4: Test loss (negative variational lower bound) on binarized MNIST VAE with (a) Bernoulli latent variables (784 â 200 â 784) and (b) categorical latent variables (784 â (20 Ã 10) â 200).
Table 1: The Gumbel-Softmax estimator outperforms other estimators on Bernoulli and Categorical latent variables. For the structured output prediction (SBN) task, numbers correspond to negative log-likelihoods (nats) of input images (lower is better). For the VAE task, numbers correspond to negative variational lower bounds (nats) on the log-likelihood (lower is better). | 1611.01144#22 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 22 | Adventure Seeker We imagine a player placed on a hill, sloping upward to the right (Figure 1(a)). At each turn, the player can move to the right (up the hill) or left (down the hill). The environment adjusts the playerâs position accordingly, adding some random noise. Between the left and right edges of the hill, the player gets more reward for spending time higher on the hill. But if the player goes too far to the right, she will fall off, terminating the episode (catastrophe). Formally, the state is single continuous variable s â [0, 1.0], denoting the playerâs position. The starting position for each episode is chosen uniformly at random in the interval [.25, .75]. The available actions consist only of {â1, +1} (left and right). Given an action at in state st , T (st +1|st , at ) the successor state is produced st +1 â st + .01·at +η where η â¼ N (0, .012). The reward at each turn is st (proportional to height). The player falls off the hill, entering the catastrophic terminating state, whenever st +1 > 1.0 or st +1 < 0.0. | 1611.01211#22 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 22 | We would like to remark that the algorithm advanced in this section can be thought of as a general strategy for modifying the backward messages in back-propagation so as to stabilize the activations.
Instead of a trust region update, one could alternatively add an appropriately scaled KL cost to the objective function as proposed by Heess et al. (2015). This approach, however, is less robust to the choice of hyper-parameters in our experience.
The ACER algorithm results from a combination of the above ideas, with the precise pseudo-code appearing in Appendix A. A master algorithm (Algorithm 1) calls ACER on-policy to perform updates and propose trajectories. It then calls ACER off-policy component to conduct several replay steps. When on-policy, ACER effectively becomes a modiï¬ed version of A3C where Q instead of V baselines are employed and trust region optimization is used.
# 4 RESULTS ON ATARI
We use the Arcade Learning Environment of Bellemare et al. (2013) to conduct an extensive evaluation. We deploy one single algorithm and network architecture, with ï¬xed hyper-parameters, to learn to play 57 Atari games given only raw pixel observations and game rewards. This task is highly demanding because of the diversity of games, and high-dimensional pixel-level observations. | 1611.01224#22 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 23 | SBN (Bern.) SBN (Cat.) VAE (Bern.) VAE (Cat.) SF 72.0 73.1 112.2 110.6 DARN MuProp 59.7 67.9 110.9 128.8 58.9 63.0 109.7 107.0 ST 58.9 61.8 116.0 110.9 Annealed ST Gumbel-S. 58.7 61.1 111.5 107.8 58.5 59.0 105.0 101.5
4.3 GENERATIVE SEMI-SUPERVISED CLASSIFICATION
We apply the Gumbel-Softmax estimator to semi-supervised classiï¬cation on the binary MNIST dataset. We compare the original marginalization-based inference approach (Kingma et al., 2014) to single-sample inference with Gumbel-Softmax and ST Gumbel-Softmax. | 1611.01144#23 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 23 | This game should be easy to solve. There exists a threshold above which the agent should always choose to go left and below which it should always go right. And yet a DQN agent will periodically die. Initially, the DQN quickly learns a good policy and avoids the catastrophe, but over the course of continued training, the agent, owing to the shape of the reward function, collapses to a policy which always moves right, regardless of the state. We might critically ask in what real-world scenario, we could depend upon a system that cannot solve Adventure Seeker. | 1611.01211#23 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 23 | Our experimental setup uses 16 actor-learner threads running on a single machine with no GPUs. We adopt the same input pre-processing and network architecture as Mnih et al. (2015). Speciï¬cally, the network consists of a convolutional layer with 32 8 8 ï¬lters with stride 4 followed by another convolutional layer with 64 4 4 ï¬lters with stride 2, followed by a ï¬nal convolutional layer with 64 3 ï¬lters with stride 1, followed by a fully-connected layer of size 512. Each of the hidden layers 3 is followed by a rectiï¬er nonlinearity. The network outputs a softmax policy and Q values.
6
Published as a conference paper at ICLR 2017 | 1611.01224#23 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.