id stringlengths 12–15 | title stringlengths 8–162 | content stringlengths 1–17.6k | prechunk_id stringlengths 0–15 | postchunk_id stringlengths 0–15 | arxiv_id stringlengths 10–10 | references listlengths 1–1 |
---|---|---|---|---|---|---|
1611.01368#15 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | We only considered relatively unambiguous words, in which a single POS accounted for more than 90% of the word's occurrences in the corpus. Figure 2f shows that the first principal component corresponded almost perfectly to the expected number of the noun, suggesting that the model learned the number of specific words very well; recall that the model did not have access during training to noun number annotations or to morphological suffixes such as -s that could be used to identify plurals. | 1611.01368#14 | 1611.01368#16 | 1611.01368 | [
"1602.08952"
]
|
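The chunk above reports that the first principal component of the learned noun representations tracks grammatical number. A minimal Python sketch of that kind of check, not from the paper's codebase: the `nouns`, the plurality heuristic, and the random `embeddings` below are illustrative placeholders standing in for the trained 50-dimensional embeddings and gold number annotations.

```python
import numpy as np

rng = np.random.default_rng(0)
nouns = ["key", "keys", "cabinet", "cabinets", "toy", "toys"]
is_plural = [n.endswith("s") for n in nouns]            # crude stand-in for gold noun number
embeddings = rng.normal(size=(len(nouns), 50))          # stand-in for trained embeddings

centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
pc1 = centered @ vt[0]                                    # projection onto the first principal component

for noun, score, plural in zip(nouns, pc1, is_plural):
    print(f"{noun:10s} PC1={score:+.2f} plural={plural}")
```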
1611.01368#16 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Visualizing the network's activations: We start investigating the inner workings of the number prediction network by analyzing its activation in response to particular syntactic constructions. To simplify the analysis, we deviate from our practice in the rest of this paper and use constructed sentences. We first constructed sets of sentence prefixes based on the following patterns: PP: The toy(s) of the boy(s)... RC: The toy(s) that the boy(s)... These patterns differ by exactly one function word, which determines the type of the modifier of the main clause subject: a prepositional phrase (PP) in the first sentence and a relative clause (RC) in the second. In PP sentences the correct number of the upcoming verb is determined by the main clause subject toy(s); in RC sentences it is determined by the embedded subject boy(s). We generated all four versions of each pattern, and repeated the process ten times with different lexical items (the house(s) of/that the girl(s), the computer(s) of/that the student(s), etc.), for a total of 80 sentences. The network made correct number predictions for all 40 PP sentences, but made three errors in RC sentences. We averaged the word-by-word activations across all sets of ten sentences that had the same combination of modifier (PP or RC), first noun number and second noun number. | 1611.01368#15 | 1611.01368#17 | 1611.01368 | [
"1602.08952"
]
|
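A minimal sketch of the constructed-prefix design described in the chunk above (PP vs. RC modifier, crossed with singular/plural for both nouns, repeated over ten lexical pairs). The word lists are assumptions; the paper only names three of the ten lexical pairs in this excerpt.

```python
pairs = [("toy", "boy"), ("house", "girl"), ("computer", "student")]  # extend to ten pairs

def prefixes(noun1, noun2):
    out = []
    for n1 in (noun1, noun1 + "s"):            # singular / plural main-clause subject
        for n2 in (noun2, noun2 + "s"):        # singular / plural embedded noun
            out.append(f"The {n1} of the {n2}...")    # PP modifier
            out.append(f"The {n1} that the {n2}...")  # RC modifier
    return out

sentences = [s for pair in pairs for s in prefixes(*pair)]
print(len(sentences), sentences[:4])   # 8 prefixes per lexical pair, 80 with ten pairs
```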
1611.01368#17 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Plots of the activation of all 50 units are provided in the Appendix (Figure 5). Figure 3a highlights a unit (Unit 1) that shows a particularly clear pattern: it tracks the number of the main clause subject throughout the PP modifier, resets when it reaches the relativizer that which introduces the RC modifier, and then switches to tracking the number of the embedded subject. To explore how the network deals with dependencies spanning a larger number of words, we tracked its activation during the processing of the following two sentences:9 The houses of/that the man from the office across the street... | 1611.01368#16 | 1611.01368#18 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#18 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The network made the correct prediction for the PP but not the RC sentence (as before, the correct predictions are PLURAL for PP and SINGULAR for RC). [Footnote 9: We simplified this experiment in light of the relative robustness of the first experiment to lexical items and to whether each of the nouns was singular or plural.] Figure 3: Word-by-word visualization of LSTM activation: (a) a unit that correctly predicts the number of an upcoming verb. This number is determined by the first noun (X) when the modifier is a prepositional phrase (PP) and by the second noun (Y) when it is an object relative clause (RC); (b) the evolution of the predictions in the case of a longer modifier: the predictions correctly diverge at the embedded noun, but then incorrectly converge again; (c) the activation of four representative units over the course of the same sentences. Figure 3b shows that the network begins by making the correct prediction for RC immediately after that, but then falters: as the sentence goes on, the resetting effect of that diminishes. The activation time courses shown in Figure 3c illustrate that Unit 1, which identified the subject correctly when the prefix was short, gradually forgets that it is in an embedded clause as the prefix grows longer. By contrast, Unit 0 shows a stable capacity to remember the current embedding status. Additional representative units shown in Figure 3c are Unit 46, which consistently stores the number of the main clause subject, and Unit 27, which tracks the number of the most recent noun, resetting at noun phrase boundaries. While the interpretability of these patterns is encouraging, our analysis only scratches the surface of the rich possibilities of a linguistically-informed analysis of a neural network trained to perform a syntax-sensitive task; we leave a more extensive investigation for future work. # 5 Alternative Training Objectives The number prediction task followed a fully supervised objective, in which the network identifies the number of an upcoming verb based only on the words preceding the verb. | 1611.01368#17 | 1611.01368#19 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#19 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | This section proposes three objectives that modify some of the goals and assumptions of the number prediction objective (see Table 1 for an overview). Verb inflection: This objective is similar to number prediction, with one difference: the network receives not only the words leading up to the verb, but also the singular form of the upcoming verb (e.g., writes). In practice, then, the network needs to decide between the singular and plural forms of a particular verb (writes or write). Having access to the semantics of the verb can help the network identify the noun that serves as its subject without using the syntactic subjecthood criteria. For example, in the following sentence: | 1611.01368#18 | 1611.01368#20 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#20 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | People from the capital often eat pizza. only people is a plausible subject for eat; the network can use this information to infer that the correct form of the verb is eat rather than eats. This objective is similar to the task that humans face during language production: after the speaker has decided to use a particular verb (e.g., write), he or she needs to decide whether its form will be write or writes (Levelt et al., 1999; Staub, 2009). Table 1: Examples of the four training objectives and corresponding prediction tasks. Number prediction: sample input "The keys to the cabinet", training signal PLURAL, prediction task SINGULAR/PLURAL?, correct answer PLURAL. Verb inflection: sample input "The keys to the cabinet [is/are]", training signal PLURAL, prediction task SINGULAR/PLURAL?, correct answer PLURAL. Grammaticality judgment: sample input "The keys to the cabinet are here.", training signal GRAMMATICAL, prediction task GRAMMATICAL/UNGRAMMATICAL?, correct answer GRAMMATICAL. Language model: sample input "The keys to the cabinet", training signal "are", prediction task P(are) > P(is)?, correct answer True. Grammaticality judgments: The previous objectives explicitly indicate the location in the sentence in which a verb can appear, giving the network a cue to syntactic clause boundaries. They also explicitly direct the network's attention to the number of the verb. As a form of weaker supervision, we experimented with a grammaticality judgment objective. In this scenario, the network is given a complete sentence, and is asked to judge whether or not it is grammatical. To train the network, we made half of the examples in our training corpus ungrammatical by fl | 1611.01368#19 | 1611.01368#21 | 1611.01368 | [
"1602.08952"
]
|
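A minimal Python sketch of how a single corpus dependency could be turned into training examples for the four objectives summarized in Table 1 above. This is not the authors' data pipeline; the function signature and field choices are illustrative assumptions.

```python
def make_examples(prefix_words, verb_sg, verb_pl, correct_number, full_sentence):
    number_prediction = (prefix_words, correct_number)              # SINGULAR / PLURAL label
    verb_inflection   = (prefix_words + [verb_sg], correct_number)  # prefix plus singular verb form
    observed_verb     = verb_pl if correct_number == "PLURAL" else verb_sg
    grammaticality    = (full_sentence, "GRAMMATICAL")              # half the corpus later gets a flipped verb
    language_model    = (prefix_words, observed_verb)               # next-word target
    return number_prediction, verb_inflection, grammaticality, language_model

examples = make_examples(["the", "keys", "to", "the", "cabinet"], "is", "are", "PLURAL",
                         ["the", "keys", "to", "the", "cabinet", "are", "here"])
for ex in examples:
    print(ex)
```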
1611.01368#21 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | ipping the number of the verb.10 The network read the entire sentence and received a supervision signal at the end. This task is modeled after a common human data collection technique in linguistics (Schütze, 1996), although our training regime is of course very different to the training that humans are exposed to: humans rarely receive ungrammatical sentences labeled as such (Bowerman, 1988). Language modeling (LM): Finally, we experimented with a word prediction objective, in which the model did not receive any grammatically relevant supervision (Elman, 1990; Elman, 1991). In this scenario, the goal of the network is to predict the next word at each point in every sentence. It receives unlabeled sentences and is not specifically instructed to attend to the number of the verb. In the network that implements this training scenario, RNN activation after each word is fed into a fully connected dense layer followed by a softmax layer over the entire vocabulary. We evaluate the knowledge that the network has acquired about subject-verb noun agreement using a task similar to the verb inflection task. To perform the task, we compare the probabilities that the model assigns to the two forms of the verb that in fact occurred in the corpus (e.g., write and writes), and select the form with the higher probability.11 As this task is not part of the network's training objective, and the model needs to allocate considerable resources to predicting each word in the sentence, we expect the LM to perform worse than the explicitly supervised objectives. [Footnote 10: In some sentences this will not in fact result in an ungrammatical sentence, e.g. with collective nouns such as group, which are compatible with both singular and plural verbs in some dialects of English (Huddleston and Pullum, 2002); those cases appear to be rare. Footnote 11: One could also imagine performing the equivalent of the number prediction task by aggregating LM probability mass over all plural verbs and all singular verbs. This approach may be more severely affected by part-of-speech ambiguous words than the one we adopted; we leave the exploration of this approach to future work.] Results: When considering all agreement dependencies, all models achieved error rates below 7% (Figure 4a); as mentioned above, even the noun-only number prediction baselines achieved error rates below 5% on this task. At the same time, there were large differences in accuracy across training objectives. The verb inflection network performed slightly but significantly better than the number prediction one (0.8% compared to 0.83% errors), suggesting that the semantic information carried by the verb is moderately helpful. The grammaticality judgment objective performed somewhat worse, at 2.5% errors, but still outperformed the noun-only baselines by a large margin, showing the capacity of the LSTM architecture to learn syntactic dependencies even given fairly indirect evidence. The worst performer was the language model. It | 1611.01368#20 | 1611.01368#22 | 1611.01368 | [
"1602.08952"
]
|
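The LM-based evaluation described above picks whichever of the two verb forms receives higher probability given the prefix. A minimal sketch of that comparison; `lm_next_word_probs` is a hypothetical stand-in for a trained language model, not an actual API from the paper.

```python
def lm_next_word_probs(prefix):
    # placeholder distribution over (part of) the vocabulary; a real LM would be queried here
    return {"is": 0.03, "are": 0.05}

def lm_agreement_prediction(prefix, sg_form, pl_form):
    probs = lm_next_word_probs(prefix)
    return "PLURAL" if probs[pl_form] > probs[sg_form] else "SINGULAR"

print(lm_agreement_prediction(["the", "keys", "to", "the", "cabinet"], "is", "are"))
```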
1611.01368#22 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | made eight times as many errors as the original number prediction network (6.78% compared to 0.83%), and did substantially worse than the noun-only baselines (though recall that the noun-only baselines were still explicitly trained to predict verb number). Figure 4: Alternative tasks and additional experiments: (a) overall error rate across tasks (note that the y-axis ends at 10%); (b) effect of count of attractors in homogeneous dependencies across training objectives; (c) comparison of the Google LM (Jozefowicz et al., 2016) to our LM and one of our supervised verb inflection systems, on a sample of sentences; (d) number prediction: effect of count of attractors using SRNs with standard training or LSTM with targeted training; (e) number prediction: difference in error rate between singular and plural subjects across RNN cell types. Error bars represent binomial 95% confidence intervals. The differences across the networks are more striking when we focus on dependencies with agreement attractors (Figure 4b). Here, the language model does worse than chance in the most difficult cases, and only slightly better than the noun-only baselines. The worse-than-chance performance suggests that attractors actively confuse the networks rather than cause them to make a random decision. The other models degrade more gracefully with the number of agreement attractors; overall, the grammaticality judgment objective is somewhat more difficult than the number prediction and verb inflection ones. In summary, we conclude that while the LSTM is capable of learning syntax-sensitive agreement dependencies under various objectives, the language-modeling objective alone is not sufficient for learning such dependencies, and a more direct form of training signal is required. Comparison to a large-scale language model: One objection to our language modeling result is that our LM faced a much harder objective than our other models (predicting a distribution over 10,000 vocabulary items is certainly harder than binary classification) but was equipped with the same capacity (50-dimensional hidden state and word vectors). Would the performance gap between the LM and the explicitly supervised models close if we increased the capacity of the LM? | 1611.01368#21 | 1611.01368#23 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#23 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | We address this question using a very large publicly available LM (Jozefowicz et al., 2016), which we refer to as the Google LM.12 The Google LM represents the current state-of-the-art in language modeling: it is trained on a billion-word corpus (Chelba et al., 2013), with a vocabulary of 800,000 words. It is based on a two-layer LSTM with 8192 units in each layer, or more than 300 times as many units as our LM; at 1.04 billion parameters it has almost [Footnote 12: https://github.com/tensorflow/models/tree/master/lm_1b] | 1611.01368#22 | 1611.01368#24 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#24 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | 2000 times as many parameters. It is a fine-tuned language model that achieves impressive perplexity scores on common benchmarks, requires a massive infrastructure for training, and pushes the boundaries of what's feasible with current hardware. We tested the Google LM with the methodology we used to test ours.13 Due to computational resource limitations, we did not evaluate it on the entire test set, but sampled a random selection of 500 sentences for each count of attractors (testing a single sentence under the Google LM takes around 5 seconds on average). The results are presented in Figure 4c, where they are compared to the performance of the supervised verb infl | 1611.01368#23 | 1611.01368#25 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#25 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | ection system. Despite having an order of magnitude more parameters and significantly larger training data, the Google LM performed poorly compared to the supervised models; even a single attractor led to a sharp increase in error rate to 28.5%, almost as high as our small-scale LM (32.6% on the same sentences). While additional attractors caused milder degradation than in our LM, the performance of the Google LM on sentences with four attractors was still worse than always guessing the majority class (SINGULAR). In summary, our experiments with the Google LM do not change our conclusions: the contrast between the poor performance of the LMs and the strong performance of the explicitly supervised objectives suggests that direct supervision has a dramatic effect on the model's ability to learn syntax-sensitive dependencies. Given that the Google LM was already trained on several hundred times more data than the number prediction system, it appears unlikely that its relatively poor performance was due to lack of training data. [Footnote 13: One technical exception was that we did not replace low-frequency words with their part-of-speech, since the Google LM is a large-vocabulary language model, and does not have parts-of-speech as part of its vocabulary.] # 6 Additional Experiments Comparison to simple recurrent networks: How much of the success of the network is due to the LSTM cells? We repeated the number prediction experiment with a simple recurrent network (SRN) (Elman, 1990), with the same number of hidden units. The SRN's performance was inferior to the LSTM's, but the average performance for a given number of agreement attractors does not suggest a qualitative difference between the cell types: the SRN makes about twice as many errors as the LSTM across the board (Figure 4d). Training only on difficult dependencies: Only a small proportion of the dependencies in the corpus had agreement attractors (Figure 2e). Would the network generalize better if dependencies with intervening nouns were emphasized during training? We repeated our number prediction experiment, this time training the model only on dependencies with at least one intervening noun (of any number). We doubled the proportion of training sentences to 20%, since the total size of the corpus was smaller (226K dependencies). | 1611.01368#24 | 1611.01368#26 | 1611.01368 | [
"1602.08952"
]
|
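A minimal sketch of the "difficult dependencies" training regime described above: keep only dependencies with at least one noun between the subject and the verb. The record fields are hypothetical, not the authors' data format.

```python
def has_intervening_noun(dep):
    return any(tag.startswith("NN") for tag in dep["pos_between_subject_and_verb"])

corpus = [
    {"sentence": "the keys are here", "pos_between_subject_and_verb": []},
    {"sentence": "the keys to the cabinet are here",
     "pos_between_subject_and_verb": ["IN", "DT", "NN"]},
]
hard_only = [dep for dep in corpus if has_intervening_noun(dep)]
print(len(hard_only), "of", len(corpus), "dependencies kept")
```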
1611.01368#26 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | This training regime resulted in a 27% decrease in error rate on dependencies with exactly one attractor (from 4.1% to 3.0%). This decrease is statistically significant, and encouraging given that the total number of dependencies in training was much lower, which complicates the learning of word embeddings. Error rates mildly decreased in dependencies with more attractors as well, suggesting some generalization (Figure 4d). Surprisingly, a similar experiment using the grammaticality judgment task led to a slight increase in error rate. While tentative at this point, these results suggest that oversampling difficult training cases may be beneficial; a curriculum progressing from easier to harder dependencies (Elman, 1993) may provide additional gains. # 7 Error Analysis Singular vs. plural subjects: Most of the nouns in English are singular: in our corpus, the fraction of singular subjects is 68%. Agreement attraction errors in humans are much more common when the attractor is plural than when it is singular (Bock and Miller, 1991; Eberhard et al., 2005). Do our models' error rates depend on the number of the subject? As Figure 2b shows, our LSTM number prediction model makes somewhat more agreement attraction errors with plural than with singular attractors; the difference is statistically significant, but the asymmetry is much less pronounced than in humans. Interestingly, the SRN version of the model does show a large asymmetry, especially as the count of attractors increases; with four plural attractors the error rate reaches 60% (Figure 4e). | 1611.01368#25 | 1611.01368#27 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#27 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Qualitative analysis: We manually examined a sample of 200 cases in which the majority of the 20 runs of the number prediction network made the wrong prediction. There were only 8890 such dependencies (about 0.6%). Many of those were straightforward agreement attraction errors; others were difficult to interpret. We mention here three classes of errors that can motivate future experiments. The networks often misidentified the heads of noun-noun compounds. In (17), for example, the models predict a singular verb even though the number of the subject conservation refugees should be determined by its head refugees. This suggests that the networks didn' | 1611.01368#26 | 1611.01368#28 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#28 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | t master the structure of English noun-noun compounds.14 Conservation refugees live in a world colored in shades of gray; limbo. Information technology (IT) assets commonly hold large volumes of confidential data. Some verbs that are ambiguous with plural nouns seem to have been misanalyzed as plural nouns and consequently act as attractors. The models predicted a plural verb in the following two sentences even though neither of them has any plural nouns, possibly because of the ambiguous verbs drives and lands: | 1611.01368#27 | 1611.01368#29 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#29 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | The ship that the player drives has a very high speed. It was also to be used to learn if the area where the lander lands is typical of the surrounding terrain. Other errors appear to be due to difficulty not in identifying the subject but in determining whether it is plural or singular. In Example (22), in particular, there is very little information in the left context of the subject 5 paragraphs suggesting that the writer considers it to be singular: Rabaul-based Japanese aircraft make three dive-bombing attacks. The lead is also rather long; 5 paragraphs is pretty lengthy for a 62 kilobyte article. [Footnote 14: The dependencies are presented as they appeared in the corpus; the predicted number was the opposite of the correct one (e.g., singular in (17), where the original is plural).] The last errors point to a limitation of the number prediction task, which jointly evaluates the model's ability to identify the subject and its ability to assign the correct number to noun phrases. # 8 Related Work The majority of NLP work on neural networks evaluates them on their performance in a task such as language modeling or machine translation (Sundermeyer et al., 2012; Bahdanau et al., 2015). These evaluation setups average over many different syntactic constructions, making it diffi | 1611.01368#28 | 1611.01368#30 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#30 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | cult to isolate the network's syntactic capabilities. Other studies have tested the capabilities of RNNs to learn simple artificial languages. Gers and Schmidhuber (2001) showed that LSTMs can learn the context-free language a^n b^n, generalizing to ns as high as 1000 even when trained only on n ∈ {1, . . . , 10}. Simple recurrent networks struggled with this language (Rodriguez et al., 1999; Rodriguez, 2001). These results have been recently replicated and extended by Joulin and Mikolov (2015). Elman (1991) tested an SRN on a miniature language that simulated English relative clauses, and found that the network was only able to learn the language under highly specific circumstances (Elman, 1993), though later work has called some of his conclusions into question (Rohde and Plaut, 1999; Cartling, 2008). Frank et al. (2013) studied the acquisition of anaphora coreference by SRNs, again in a miniature language. Recently, Bowman et al. (2015) tested the ability of LSTMs to learn an artificial language based on propositional logic. As in our study, the performance of the network degraded as the complexity of the test sentences increased. Karpathy et al. (2016) present analyses and visualization methods for character-level RNNs. Kádár et al. (2016) and Li et al. (2016) suggest visualization techniques for word-level RNNs trained to perform tasks that aren't explicitly syntactic (image captioning and sentiment analysis). Early work that used neural networks to model grammaticality judgments includes Allen and Seidenberg (1999) and Lawrence et al. (1996). More recently, the connection between grammaticality judgments and the probabilities assigned by a language model was explored by Clark et al. (2013) and Lau et al. (2015). | 1611.01368#29 | 1611.01368#31 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#31 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Finally, arguments for evaluating NLP models on a strategically sampled set of dependency types rather than a random sample of sentences have been made in the parsing literature (Rimell et al., 2009; Nivre et al., 2010; Bender et al., 2011). # 9 Discussion and Future Work Neural network architectures are typically evaluated on random samples of naturally occurring sentences, e.g., using perplexity on held-out data in language modeling. Since the majority of natural language sentences are grammatically simple, models can achieve high overall accuracy using flawed heuristics that fail on harder cases. This makes it difficult to distinguish simple but robust sequence models from more expressive architectures (Socher, 2014; Grefenstette et al., 2015; Joulin and Mikolov, 2015). Our work suggests an alternative strategy, evaluation on naturally occurring sentences that are sampled based on their grammatical complexity, which can provide more nuanced tests of language models (Rimell et al., 2009; Bender et al., 2011). This approach can be extended to the training stage: neural networks can be encouraged to develop more sophisticated generalizations by oversampling grammatically challenging training sentences. We took a first step in this direction when we trained the network only on dependencies with intervening nouns (Section 6). This training regime indeed improved the performance of the network; however, the improvement was quantitative rather than qualitative: there was limited generalization to dependencies that were even more difficult than those encountered in training. Further experiments are needed to establish the efficacy of this method. | 1611.01368#30 | 1611.01368#32 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#32 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | A network that has acquired syntactic representations sophisticated enough to handle subject-verb agreement is likely to show improved performance on other structure-sensitive dependencies, including pronoun coreference, quantifier scope and negative polarity items. As such, neural models used in NLP applications may benefit from grammatically sophisticated sentence representations developed in a multi-task learning setup (Caruana, 1998), where the model is trained concurrently on the task of interest and on one of the tasks we proposed in this paper. Of course, grammatical phenomena differ from each other in many ways. The distribution of negative polarity items is highly sensitive to semantic factors (Giannakidou, 2011). Restrictions on unbounded dependencies (Ross, 1967) may require richer syntactic representations than those required for subject-verb dependencies. The extent to which the results of our study will generalize to other constructions and other languages, then, is a matter for empirical research. Humans occasionally make agreement attraction mistakes during language production (Bock and Miller, 1991) and comprehension (Nicol et al., 1997). These errors persist in human acceptability judgments (Tanner et al., 2014), which parallel our grammaticality judgment task. Cases of grammatical agreement with the nearest rather than structurally relevant constituent have been documented in languages such as Slovenian (Marušič et al., 2007), and have even been argued to be occasionally grammatical in English (Zwicky, 2005). In future work, exploring the relationship between these cases and neural network predictions can shed light on the cognitive plausibility of those networks. # 10 Conclusion LSTMs are sequence models; they do not have built-in hierarchical representations. We have investigated how well they can learn subject-verb agreement, a phenomenon that crucially depends on hierarchical syntactic structure. When provided explicit supervision, LSTMs were able to learn to perform the verb-number agreement task in most cases, although their error rate increased on particularly difficult sentences. We conclude that LSTMs can learn to approximate structure-sensitive dependencies fairly well given explicit supervision, but more expressive architectures may be necessary to eliminate errors altogether. Finally, our results provide evidence that the language modeling objective is not by itself sufficient for learning structure-sensitive dependencies, and suggest that a joint training objective can be used to supplement language models on tasks for which syntax-sensitive dependencies are important. # Acknowledgments We thank Marco Baroni, Grzegorz Chrupała, Alexander Clark, Sol Lago, Paul Smolensky, Benjamin Spector and Roberto Zamparelli for comments and discussion. This research was supported by the European Research Council (grant ERC-2011-AdG 295810 BOOTPHON), the Agence Nationale pour la Recherche (grants ANR-10-IDEX-0001-02 PSL and ANR-10-LABX-0087 IEC) and the Israeli Science Foundation (grant number 1555/15). # References Joseph Allen and Mark S. Seidenberg. 1999. The emergence of grammaticality in connectionist networks. In Brian MacWhinney, editor, Emergentist approaches to language: Proceedings of the 28th Carnegie symposium on cognition, pages 115–151. Mahwah, NJ: Erlbaum. | 1611.01368#31 | 1611.01368#33 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#33 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference for Learning Representations. Emily M. Bender, Dan Flickinger, Stephan Oepen, and Yi Zhang. 2011. Parser evaluation over local and non-local deep dependencies in a large corpus. In Pro- ceedings of EMNLP, pages 397â 408. Kathryn Bock and Carol A. Miller. 1991. Broken agree- ment. Cognitive Psychology, 23(1):45â 93. Melissa Bowerman. 1988. The â no negative evidenceâ problem: How do children avoid constructing an overly general grammar? In John A. Hawkins, editor, Explain- ing language universals, pages 73â 101. Oxford: Basil Blackwell. Samuel R. Bowman, Christopher D. Manning, and Christopher Potts. 2015. | 1611.01368#32 | 1611.01368#34 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#34 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Tree-structured composi- tion in neural networks without tree-structured archi- tectures. In Proceedings of the NIPS Workshop on Cog- nitive Computation: Integrating Neural and Symbolic Approaches. Bo Cartling. 2008. On the implicit acquisition of a context-free grammar by a simple recurrent neural net- work. Neurocomputing, 71(7):1527â 1537. Rich Caruana. 1998. Multitask learning. In Sebastian Thrun and Lorien Pratt, editors, Learning to learn, pages 95â 133. Boston: Kluwer. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. arXiv preprint arXiv:1312.3005. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase repre- sentations using RNN encoderâ decoder for statistical machine translation. In Proceedings of EMNLP, pages 1724â 1734. Noam Chomsky. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT press. Alexander Clark, Gianluca Giorgolo, and Shalom Lap- pin. 2013. Statistical representation of grammaticality judgements: The limits of n-gram models. In Proceed- ings of the Fourth Annual Workshop on Cognitive Mod- eling and Computational Linguistics (CMCL), pages 28â 36. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and A. Noah Smith. 2016. | 1611.01368#33 | 1611.01368#35 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#35 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Recurrent neural network gram- mars. In Proceedings of NAACL/HLT, pages 199â 209. Kathleen M. Eberhard, J. Cooper Cutting, and Kathryn Bock. 2005. Making syntax of sense: Number agree- ment in sentence production. Psychological Review, 112(3):531â 559. Jeffrey L. Elman. 1990. Finding structure in time. Cogni- tive Science, 14(2):179â 211. Jeffrey L. Elman. 1991. Distributed representations, sim- ple recurrent networks, and grammatical structure. Ma- chine Learning, 7(2-3):195â 225. Jeffrey L. Elman. 1993. Learning and development in neu- ral networks: The importance of starting small. Cogni- tion, 48(1):71â 99. Martin B. H. Everaert, Marinus A. C. Huybregts, Noam Chomsky, Robert C. Berwick, and Johan J. Bolhuis. 2015. Structures, not strings: Linguistics as part of the cognitive sciences. Trends in Cognitive Sciences, 19(12):729â | 1611.01368#34 | 1611.01368#36 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#36 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | 743. Robert Frank, Donald Mathis, and William Badecker. 2013. The acquisition of anaphora by simple recur- rent networks. Language Acquisition, 20(3):181â 227. Felix Gers and J¨urgen Schmidhuber. 2001. LSTM re- current networks learn simple context-free and context- sensitive languages. IEEE Transactions on Neural Net- works, 12(6):1333â 1340. Anastasia Giannakidou. 2011. | 1611.01368#35 | 1611.01368#37 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#37 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Negative and positive polarity items: Variation, licensing, and compositional- ity. In Claudia Maienborn, Klaus von Heusinger, and Paul Portner, editors, Semantics: An international hand- book of natural language meaning. Berlin: Mouton de Gruyter. Yoav Goldberg and Joakim Nivre. 2012. A dynamic ora- cle for arc-eager dependency parsing. In Proceedings of COLING 2012, pages 959â 976. Edward Grefenstette, Karl Moritz Hermann, Mustafa Su- leyman, and Phil Blunsom. 2015. | 1611.01368#36 | 1611.01368#38 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#38 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Learning to trans- duce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1828â 1836. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735â 1780. Rodney Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cam- bridge University Press, Cambridge. | 1611.01368#37 | 1611.01368#39 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#39 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190â 198. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Exploring arXiv preprint Shazeer, and Yonghui Wu. the limits of language modeling. arXiv:1602.02410. 2016. ´Akos K´ad´ar, Grzegorz ChrupaÅ a, and Afra Alishahi. 2016. | 1611.01368#38 | 1611.01368#40 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#40 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Representation of linguistic form and func- arXiv preprint tion in recurrent neural networks. arXiv:1602.08952. Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2016. Visualizing and understanding recurrent networks. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Confer- ence for Learning Representations. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Asso- ciation of Computational Linguistics, 4:313â 327. Jey Han Lau, Alexander Clark, and Shalom Lappin. 2015. Unsupervised prediction of acceptability judgements. In Proceedings of ACL/IJCNLP, pages 1618â 1628. Steve Lawrence, Lee C. Giles, and Santliway Fong. 1996. | 1611.01368#39 | 1611.01368#41 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#41 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Can recurrent neural networks learn natural language grammars? In IEEE International Conference on Neu- ral Networks, volume 4, pages 1853â 1858. Willem J. M. Levelt, Ardi Roelofs, and Antje S. Meyer. 1999. A theory of lexical access in speech production. Behavioral and Brain Sciences, 22(1):1â 75. Jiwei Li, Xinlei Chen, Eduard H. Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of NAACL-HLT 2016, pages 681â | 1611.01368#40 | 1611.01368#42 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#42 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | 691. Franc MaruË siË c, Andrew Nevins, and Amanda Saksida. 2007. Last-conjunct agreement in Slovenian. In An- nual Workshop on Formal Approaches to Slavic Lin- guistics, pages 210â 227. Tomas Mikolov, Martin Karaï¬ Â´at, Lukas Burget, Jan Cer- nock`y, and Sanjeev Khudanpur. 2010. Recurrent neu- ral network based language model. In INTERSPEECH, pages 1045â 1048. Janet L. Nicol, Kenneth I. Forster, and Csaba Veres. 1997. Subjectâ verb agreement processes in comprehension. Journal of Memory and Language, 36(4):569â 587. | 1611.01368#41 | 1611.01368#43 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#43 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos Gomez-Rodriguez. 2010. Evaluation of dependency parsers on unbounded dependencies. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 833â 841. Association for Computa- tional Linguistics. Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proceedings of EMNLP, pages 813â 821. Paul Rodriguez, Janet Wiles, and Jeffrey L. Elman. 1999. | 1611.01368#42 | 1611.01368#44 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#44 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | A recurrent neural network that learns to count. Con- nection Science, 11(1):5â 40. Paul Rodriguez. 2001. Simple recurrent networks learn context-free and context-sensitive languages by count- ing. Neural Computation, 13(9):2093â 2118. Douglas L. T. Rohde and David C. Plaut. 1999. Language acquisition in the absence of explicit negative evidence: How important is starting small? Cognition, 72(1):67â 109. John Robert Ross. 1967. | 1611.01368#43 | 1611.01368#45 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#45 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Constraints on variables in syntax. Ph.D. thesis, MIT. Carson T. Sch¨utze. 1996. The empirical base of linguis- tics: Grammaticality judgments and linguistic method- ology. Chicago, IL: University of Chicago Press. Richard Socher. 2014. Recursive Deep Learning for Natural Language Processing and Computer Vision. Ph.D. thesis, Stanford University. Adrian Staub. 2009. On the interpretation of the number attraction effect: Response time evidence. Journal of Memory and Language, 60(2):308â 327. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In INTERSPEECH. Darren Tanner, Janet Nicol, and Laurel Brehm. 2014. The time-course of feature interference in agreement com- prehension: Multiple mechanisms and asymmetrical attraction. Journal of Memory and Language, 76:195â 215. Oriol Vinyals, Å ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. | 1611.01368#44 | 1611.01368#46 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#46 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2755–2763. Arnold Zwicky. 2005. Agreement with nearest always bad? http://itre.cis.upenn.edu/~myl/languagelog/archives/001846.html. [Appendix Figure 5: word-by-word activation plots for Units 0–49 under the PP and RC conditions; the numeric plot data is not recoverable from this extraction.] | 1611.01368#45 | 1611.01368#47 | 1611.01368 | [
"1602.08952"
]
|
1611.01368#52 | Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies | Figure 5: Activation plots for all units (see Figure 3a and text in p. 7). | 1611.01368#51 | | 1611.01368 | [
"1602.08952"
]
|
|
1611.01224#0 | Sample Efficient Actor-Critic with Experience Replay | arXiv:1611.01224v2 [cs.LG] 10 Jul 2017. Published as a conference paper at ICLR 2017. # SAMPLE EFFICIENT ACTOR-CRITIC WITH EXPERIENCE REPLAY Ziyu Wang DeepMind [email protected] Victor Bapst DeepMind [email protected] Nicolas Heess DeepMind [email protected] Volodymyr Mnih DeepMind [email protected] Remi Munos DeepMind [email protected] Koray Kavukcuoglu DeepMind [email protected] Nando de Freitas DeepMind, CIFAR, Oxford University [email protected] | 1611.01224#1 | 1611.01224 | [
"1602.01783"
]
|
|
1611.01224#1 | Sample Efficient Actor-Critic with Experience Replay | # ABSTRACT This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method. # 1 INTRODUCTION Realistic simulated environments, where agents can be trained to learn a large repertoire of cognitive skills, are at the core of recent breakthroughs in AI (Bellemare et al., 2013; Mnih et al., 2015; Schulman et al., 2015a; Narasimhan et al., 2015; Mnih et al., 2016; Brockman et al., 2016; Oh et al., 2016). With richer realistic environments, the capabilities of our agents have increased and improved. Unfortunately, these advances have been accompanied by a substantial increase in the cost of simulation. In particular, every time an agent acts upon the environment, an expensive simulation step is conducted. Thus to reduce the cost of simulation, we need to reduce the number of simulation steps (i.e. samples of the environment). This need for sample efficiency is even more compelling when agents are deployed in the real world. Experience replay (Lin, 1992) has gained popularity in deep Q-learning (Mnih et al., 2015; Schaul et al., 2016; Wang et al., 2016; Narasimhan et al., 2015), where it is often motivated as a technique for reducing sample correlation. Replay is actually a valuable tool for improving sample efficiency and, as we will see in our experiments, state-of-the-art deep Q-learning methods (Schaul et al., 2016; Wang et al., 2016) have been up to this point the most sample efficient techniques on Atari by a significant margin. | 1611.01224#0 | 1611.01224#2 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#2 | Sample Efficient Actor-Critic with Experience Replay | However, we need to do better than deep Q-learning, because it has two important limitations. First, the deterministic nature of the optimal policy limits its use in adversarial domains. Second, finding the greedy action with respect to the Q function is costly for large action spaces. Policy gradient methods have been at the heart of significant advances in AI and robotics (Silver et al., 2014; Lillicrap et al., 2015; Silver et al., 2016; Levine et al., 2015; Mnih et al., 2016; Schulman et al., 2015a; Heess et al., 2015). Many of these methods are restricted to continuous domains or to very specific tasks such as playing Go. The existing variants applicable to both continuous and discrete domains, such as the on-policy asynchronous advantage actor critic (A3C) of Mnih et al. (2016), are sample inefficient. The design of stable, sample efficient actor critic methods that apply to both continuous and discrete action spaces has been a long-standing hurdle of reinforcement learning (RL). We believe this paper is the first to address this challenge successfully at scale. | 1611.01224#1 | 1611.01224#3 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#3 | Sample Efficient Actor-Critic with Experience Replay | More specifically, we introduce an actor critic with experience replay (ACER) that nearly matches the state-of-the-art performance of deep Q-networks with prioritized replay on Atari, and substantially outperforms A3C in terms of sample efficiency on both Atari and continuous control domains. ACER capitalizes on recent advances in deep neural networks, variance reduction techniques, the off-policy Retrace algorithm (Munos et al., 2016) and parallel training of RL agents (Mnih et al., 2016). Yet, crucially, its success hinges on innovations advanced in this paper: truncated importance sampling with bias correction, stochastic dueling network architectures, and efficient trust region policy optimization. On the theoretical front, the paper proves that the Retrace operator can be rewritten from our proposed truncated importance sampling with bias correction technique. | 1611.01224#2 | 1611.01224#4 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#4 | Sample Efficient Actor-Critic with Experience Replay | # 2 BACKGROUND AND PROBLEM SETUP Consider an agent interacting with its environment over discrete time steps. At time step t, the agent observes the nx-dimensional state vector xt ∈ X ⊆ R^nx, chooses an action at according to a policy π(a | xt) and observes a reward signal rt ∈ R produced by the environment. We will consider discrete actions at ∈ {1, 2, . . . , Na} in Sections 3 and 4, and continuous actions at ∈ A ⊆ R^na in Section 5. The goal of the agent is to maximize the discounted return Rt = Σ_{i≥0} γ^i r_{t+i} in expectation. The discount factor γ ∈ [0, 1) trades off the importance of immediate and future rewards. For an agent following policy π, we use the standard definitions of the state-action and state-only value functions: Q^π(xt, at) = E_{x_{t+1:∞}, a_{t+1:∞}}[Rt | xt, at] and V^π(xt) = E_{at}[Q^π(xt, at) | xt]. Here, the expectations are with respect to the observed environment states xt and the actions generated by the policy π, where x_{t+1:∞} denotes a state trajectory starting at time t + 1. | 1611.01224#3 | 1611.01224#5 | 1611.01224 | [
"1602.01783"
]
|
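A minimal Python sketch of the discounted return R_t = Σ_{i≥0} γ^i r_{t+i} defined in the chunk above, computed for a finite reward sequence; the reward values are arbitrary example numbers, not results from the paper.

```python
def discounted_return(rewards, gamma=0.99):
    ret = 0.0
    for r in reversed(rewards):     # accumulate backwards so each step applies one factor of gamma
        ret = r + gamma * ret
    return ret

print(discounted_return([0.0, 0.0, 1.0, 0.0, 2.0]))
```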
1611.01224#5 | Sample Efficient Actor-Critic with Experience Replay | We also need to define the advantage function A^π(xt, at) = Q^π(xt, at) − V^π(xt), which provides a relative measure of the value of each action since E_{at}[A^π(xt, at)] = 0. The parameters θ of the differentiable policy πθ(at | xt) can be updated using the discounted approximation to the policy gradient (Sutton et al., 2000), which, borrowing notation from Schulman et al. (2015b), is defined as: g = E_{x_{0:∞}, a_{0:∞}}[ Σ_{t≥0} A^π(xt, at) ∇θ log πθ(at | xt) ]. (1) Following Proposition 1 of Schulman et al. (2015b), we can replace A^π(xt, at) in the above expression with the state-action value Q^π(xt, at), the discounted return Rt, or the temporal difference residual rt + γV^π(x_{t+1}) − V^π(xt), without introducing bias. These choices will however have different variance. Moreover, in practice we will approximate these quantities with neural networks, thus introducing additional approximation errors and biases. Typically, the policy gradient estimator using Rt will have higher variance and lower bias whereas the estimators using function approximation will have higher bias and lower variance. Combining Rt with the current value function approximation to minimize bias while maintaining bounded variance is one of the central design principles behind ACER. | 1611.01224#4 | 1611.01224#6 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#6 | Sample Efficient Actor-Critic with Experience Replay | To trade off bias and variance, the asynchronous advantage actor critic (A3C) of Mnih et al. (2016) uses a single trajectory sample to obtain the following gradient approximation: ĝ^a3c = Σ_{t≥0} [ ( Σ_{i=0}^{k−1} γ^i r_{t+i} + γ^k V^π_{θv}(x_{t+k}) − V^π_{θv}(xt) ) ∇θ log πθ(at | xt) ]. (2) A3C combines both k-step returns and function approximation to trade off variance and bias. We may think of V^π_{θv} as an estimate of V^π. In the following section, we will introduce the discrete-action version of ACER. ACER may be understood as the off-policy counterpart of the A3C method of Mnih et al. (2016). As such, ACER builds on all the engineering innovations of A3C, including efficient parallel CPU computation. | 1611.01224#5 | 1611.01224#7 | 1611.01224 | [
"1602.01783"
]
|
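A minimal sketch of the k-step advantage estimate inside equation (2): k steps of discounted rewards, bootstrapped with the value estimate at x_{t+k}, minus the baseline V(x_t). The inputs are plain Python lists standing in for rollout data; the numbers are arbitrary examples.

```python
def k_step_advantage(rewards, values, t, k, gamma=0.99):
    adv = -values[t]                            # subtract the baseline V(x_t)
    for i in range(k):
        adv += (gamma ** i) * rewards[t + i]    # discounted k-step rewards
    adv += (gamma ** k) * values[t + k]         # bootstrap with V(x_{t+k})
    return adv

rewards = [0.0, 1.0, 0.0, 0.5, 0.0]
values  = [0.4, 0.5, 0.6, 0.3, 0.2, 0.1]        # one extra entry for the bootstrap state
print(k_step_advantage(rewards, values, t=0, k=3))
```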
1611.01224#7 | Sample Efficient Actor-Critic with Experience Replay | ACER uses a single deep neural network to estimate the policy πθ(at | xt) and the value function V^π_{θv}(xt). (For clarity and generality, we are using two different symbols to denote the parameters of the policy and value function, θ and θv, but most of these parameters are shared in the single neural network.) Our neural networks, though building on the networks used in A3C, will introduce several modifications and new modules. # 3 DISCRETE ACTOR CRITIC WITH EXPERIENCE REPLAY Off-policy learning with experience replay may appear to be an obvious strategy for improving the sample efficiency of actor-critics. However, controlling the variance and stability of off-policy estimators is notoriously hard. Importance sampling is one of the most popular approaches for off-policy learning (Meuleau et al., 2000; Jie & Abbeel, 2010; Levine & Koltun, 2013). | 1611.01224#6 | 1611.01224#8 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#8 | Sample Efficient Actor-Critic with Experience Replay | In our context, it proceeds as follows. Suppose we retrieve a trajectory {x0, a0, r0, µ(· | x0), . . . , xk, ak, rk, µ(· | xk)}, where the actions have been sampled according to the behavior policy µ, from our memory of experiences. Then, the importance weighted policy gradient is given by: ĝ^imp = ( Π_{t=0}^{k} ρt ) Σ_{t=0}^{k} ( Σ_{i=0}^{k} γ^i r_{t+i} ) ∇θ log πθ(at | xt), (3) where ρt = π(at | xt) / µ(at | xt) denotes the importance weight. This estimator is unbiased, but it suffers from very high variance as it involves a product of many potentially unbounded importance weights. To prevent the product of importance weights from exploding, Wawrzyński (2009) truncates this product. Truncated importance sampling over entire trajectories, although bounded in variance, could suffer from significant bias. | 1611.01224#7 | 1611.01224#9 | 1611.01224 | [
"1602.01783"
]
|
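A tiny sketch contrasting the trajectory-level importance weight in equation (3), a product that can grow multiplicatively, with the per-step marginal weights that equation (4) below relies on. The probabilities are arbitrary example numbers, not data from the paper.

```python
pi_probs = [0.9, 0.8, 0.7, 0.9]     # pi(a_t | x_t) along a retrieved trajectory
mu_probs = [0.2, 0.3, 0.2, 0.4]     # mu(a_t | x_t) under the behaviour policy

rho = [p / m for p, m in zip(pi_probs, mu_probs)]   # marginal importance weights
trajectory_weight = 1.0
for r in rho:
    trajectory_weight *= r                          # product over the whole trajectory

print("per-step weights:", [round(r, 2) for r in rho])
print("product over the trajectory:", round(trajectory_weight, 2))
```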
1611.01224#9 | Sample Efficient Actor-Critic with Experience Replay | Recently, Degris et al. (2012) attacked this problem by using marginal value functions over the limiting distribution of the process to yield the following approximation of the gradient: g^marg = E_{xt∼β, at∼µ}[ ρt ∇θ log πθ(at | xt) Q^π(xt, at) ], (4) where E_{xt∼β, at∼µ}[·] is an expectation with respect to the limiting distribution β(x) = lim_{t→∞} P(xt = x | x0, µ) with behavior policy µ. To keep the notation succinct, we will replace E_{xt∼β, at∼µ}[·] with E_{xt at}[·] and ensure we remind readers of this when necessary. Two important facts about equation (4) must be highlighted. First, note that it depends on Q^π and not on Q^µ; consequently we must be able to estimate Q^π. Second, we no longer have a product of importance weights, but instead only need to estimate the marginal importance weight ρt. Importance sampling in this lower dimensional space (over marginals as opposed to trajectories) is expected to exhibit lower variance. | 1611.01224#8 | 1611.01224#10 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#10 | Sample Efficient Actor-Critic with Experience Replay | Degris et al. (2012) estimate Q^π in equation (4) using lambda returns: R^λ_t = rt + (1 − λ)γV(x_{t+1}) + λγρ_{t+1} R^λ_{t+1}. This estimator requires that we know how to choose λ ahead of time to trade off bias and variance. Moreover, when using small values of λ to reduce variance, occasional large importance weights can still cause instability. In the following subsection, we adopt the Retrace algorithm of Munos et al. (2016) to estimate Q^π. Subsequently, we propose an importance weight truncation technique to improve the stability of the off-policy actor critic of Degris et al. (2012), and introduce a computationally efficient trust region scheme for policy optimization. The formulation of ACER for continuous action spaces will require further innovations that are advanced in Section 5. | 1611.01224#9 | 1611.01224#11 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#11 | Sample Efficient Actor-Critic with Experience Replay | . Subsequently, we propose an importance weight truncation technique to improve the stability of the off-policy actor critic of Degris et al. (2012), and introduce a computationally efficient trust region scheme for policy optimization. The formulation of ACER for continuous action spaces will require further innovations that are advanced in Section 5. 3.1 MULTI-STEP ESTIMATION OF THE STATE-ACTION VALUE FUNCTION In this paper, we estimate Q^π(x_t, a_t) using Retrace (Munos et al., 2016). (We also experimented with the related tree backup method of Precup et al. (2000) but found Retrace to perform better in practice.) Given a trajectory generated under the behavior policy μ, the Retrace estimator can be expressed recursively as follows (footnote 1): Q^ret(x_t, a_t) = r_t + γ ρ̄_{t+1}[Q^ret(x_{t+1}, a_{t+1}) − Q(x_{t+1}, a_{t+1})] + γV(x_{t+1}),  (5) | 1611.01224#10 | 1611.01224#12 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#12 | Sample Efficient Actor-Critic with Experience Replay | (Footnote 1: For ease of presentation, we consider only λ = 1 for Retrace.) where ρ̄_t is the truncated importance weight, ρ̄_t = min{1, ρ_t} with ρ_t = π(a_t|x_t)/μ(a_t|x_t), Q is the current value estimate of Q^π, and V(x) = E_{a∼π} Q(x, a). Retrace is an off-policy, return-based algorithm which has low variance and is proven to converge (in the tabular case) to the value function of the target policy for any behavior policy, see Munos et al. (2016). The recursive Retrace equation depends on the estimate Q. To compute it, in discrete action spaces, we adopt a convolutional neural network with | 1611.01224#11 | 1611.01224#13 | 1611.01224 | [
"1602.01783"
]
|
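A minimal sketch of the Retrace recursion of Eq. (5), computed backwards over a stored trajectory with λ = 1 and truncation at 1, as described above. The list-based interface is an assumption; the bootstrap value stands in for V(x_k) at the end of the retrieved trajectory.

```python
# Backward computation of Retrace targets, Eq. (5).
import numpy as np

def retrace_targets(rewards, q_taken, v, rho, gamma=0.99, bootstrap_v=0.0):
    """rewards[t]; q_taken[t] = Q(x_t, a_t); v[t] = V(x_t) = E_{a~pi}[Q(x_t, a)];
    rho[t] = pi(a_t|x_t) / mu(a_t|x_t); bootstrap_v = V(x_k) (0 if x_k is terminal)."""
    k = len(rewards)
    targets = np.zeros(k)
    q_ret = bootstrap_v
    for t in reversed(range(k)):
        q_ret = rewards[t] + gamma * q_ret
        targets[t] = q_ret
        rho_bar = min(1.0, rho[t])                    # truncated weight, lambda = 1
        q_ret = rho_bar * (q_ret - q_taken[t]) + v[t]  # tail passed to step t-1
    return targets
```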
1611.01224#13 | Sample Efficient Actor-Critic with Experience Replay | "two heads" that outputs the estimate Q_θv(x_t, a_t), as well as the policy π_θ(a_t|x_t). This neural representation is the same as in (Mnih et al., 2016), with the exception that we output the vector Q_θv(x_t, a_t) instead of the scalar V_θv(x_t). The estimate V_θv(x_t) can be easily derived by taking the expectation of Q_θv under π_θ. To approximate the policy gradient g^marg, ACER uses Q^ret to estimate Q^π. As Retrace uses multi-step returns, it can significantly reduce bias in the estimation of the policy gradient (footnote 2). To learn the critic Q_θv(x_t, a_t), we again use Q^ret(x_t, a_t) as a target in a mean squared error loss and update its parameters θ_v with the following standard gradient: (Q^ret(x_t, a_t) − Q_θv(x_t, a_t)) ∇_θv Q_θv(x_t, a_t).  (6) | 1611.01224#12 | 1611.01224#14 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#14 | Sample Efficient Actor-Critic with Experience Replay | Because Retrace is return-based, it also enables faster learning of the critic. Thus the purpose of the multi-step estimator Q^ret in our setting is twofold: to reduce bias in the policy gradient, and to enable faster learning of the critic, hence further reducing bias. 3.2 IMPORTANCE WEIGHT TRUNCATION WITH BIAS CORRECTION The marginal importance weights in Equation (4) can become large, thus causing instability. To safeguard against high variance, we propose to truncate the importance weights and introduce a correction term via the following decomposition of g^marg: | 1611.01224#13 | 1611.01224#15 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#15 | Sample Efficient Actor-Critic with Experience Replay | g^marg = E_{x_t a_t}[ρ̄_t ∇_θ log π_θ(a_t|x_t) Q^π(x_t, a_t)] + E_{x_t}( E_{a∼π}[ [(ρ_t(a) − c)/ρ_t(a)]_+ ∇_θ log π_θ(a|x_t) Q^π(x_t, a) ] ),  (7) where ρ̄_t = min{c, ρ_t}, ρ_t(a) = π(a|x_t)/μ(a|x_t), and the expectations are with respect to the limiting state distribution under the behavior policy: x_t ∼ β, a_t ∼ μ. The clipping of the importance weight in the first term of equation (7) ensures that the variance of the gradient estimate is bounded. The correction term (second term in equation (7)) ensures that our estimate is unbiased. Note that the correction term is only active for actions such that ρ_t(a) > c. In particular, if we choose a large value for c, the correction term only comes into effect when the variance of the original off-policy estimator of equation (4) is very high. When this happens, our decomposition has the nice property that the truncated weight in the first term is at most c while the correction weight [(ρ_t(a) − c)/ρ_t(a)]_+ in the second term is at most 1. We model Q^π(x_t, a) in the correction term with our neural network approximation Q_θv(x_t, a). This modification results in what we call the truncation with bias correction trick, in this case applied to the function ∇_θ log π_θ(a_t|x_t) Q^π(x_t, a_t): ĝ^marg = E_{x_t a_t}[ρ̄_t ∇_θ log π_θ(a_t|x_t) Q^π(x_t, a_t)] + E_{x_t}( E_{a∼π}[ [(ρ_t(a) − c)/ρ_t(a)]_+ ∇_θ log π_θ(a|x_t) Q_θv(x_t, a) ] ).  (8) | 1611.01224#14 | 1611.01224#16 | 1611.01224 | [
"1602.01783"
]
|
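The decomposition in Eq. (7) is exact rather than approximate. The following toy check, with made-up discrete policies and an arbitrary per-action function h, verifies the underlying identity numerically; it is purely illustrative and not taken from the paper's code.

```python
# Numerical check that truncation plus bias correction preserves the expectation:
# E_{a~mu}[rho(a) h(a)] = E_{a~mu}[min(c, rho(a)) h(a)] + E_{a~pi}[[(rho(a)-c)/rho(a)]_+ h(a)].
import numpy as np

rng = np.random.default_rng(1)
n = 5
mu = rng.dirichlet(np.ones(n))      # behavior policy over n actions
pi = rng.dirichlet(np.ones(n))      # target policy
h = rng.normal(size=n)              # any per-action quantity, e.g. score * Q
rho = pi / mu
c = 2.0

lhs = np.sum(mu * rho * h)
rhs = (np.sum(mu * np.minimum(c, rho) * h)
       + np.sum(pi * np.clip((rho - c) / rho, 0.0, None) * h))
assert np.isclose(lhs, rhs)
```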
1611.01224#16 | Sample Efficient Actor-Critic with Experience Replay | Equation (8) involves an expectation over the stationary distribution of the Markov process. We can however approximate it by sampling trajectories {x_0, a_0, r_0, μ(·|x_0), ..., x_k, a_k, r_k, μ(·|x_k)}. (Footnote 2: An alternative to Retrace here is Q(λ) with off-policy corrections (Harutyunyan et al., 2016), which we discuss in more detail in Appendix B.) | 1611.01224#15 | 1611.01224#17 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#17 | Sample Efficient Actor-Critic with Experience Replay | generated from the behavior policy μ. Here the terms μ(·|x_t) are the policy vectors. Given these trajectories, we can compute the off-policy ACER gradient: ĝ^acer_t = ρ̄_t ∇_θ log π_θ(a_t|x_t)[Q^ret(x_t, a_t) − V_θv(x_t)] + E_{a∼π}( [(ρ_t(a) − c)/ρ_t(a)]_+ ∇_θ log π_θ(a|x_t)[Q_θv(x_t, a) − V_θv(x_t)] ).  (9) In the above expression, we have subtracted the classical baseline V_θv(x_t) to reduce variance. It is interesting to note that, when c = ∞, (9) recovers (off-policy) policy gradient up to the use of Retrace. When c = 0, (9) recovers an actor critic update that depends entirely on Q estimates. In the continuous control domain, (9) also generalizes Stochastic Value Gradients if c = 0 and the reparametrization trick is used to estimate its second term (Heess et al., 2015). | 1611.01224#16 | 1611.01224#18 | 1611.01224 | [
"1602.01783"
]
|
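A hedged sketch of the per-step discrete ACER gradient of Eq. (9). The array-based interface is an assumption: score_all[a] stands for ∇_θ log π_θ(a|x_t), q_ret is the Retrace target for the taken action, and the baseline V is computed as the policy-weighted average of the Q head.

```python
# Per-step discrete ACER gradient, Eq. (9): truncated term + bias-correction term.
import numpy as np

def acer_policy_gradient(score_all, pi, mu, a_t, q_ret, q_est, c=10.0):
    """score_all: (n_actions, n_params) array of grad_theta log pi(a|x_t);
    pi, mu: action probabilities under target / behavior policies;
    q_est[a] = Q_thetav(x_t, a); q_ret = Retrace target for a_t."""
    rho = pi / mu
    v = float(pi @ q_est)                                  # baseline V_thetav(x_t)
    g = min(c, rho[a_t]) * (q_ret - v) * score_all[a_t]    # truncated term
    corr = np.clip((rho - c) / rho, 0.0, None) * pi        # [(rho-c)/rho]_+ pi(a|x_t)
    g += ((corr * (q_est - v))[:, None] * score_all).sum(axis=0)
    return g
```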
1611.01224#18 | Sample Efficient Actor-Critic with Experience Replay | We decompose our policy network in two parts: a distribution f , and a deep neural network that gen- erates the statistics Ï Î¸(x) of this distribution. That is, given f , the policy is completely characterized by the network Ï Î¸: Ï ( Ï Î¸(x)). For example, in the discrete domain, we choose f to be the categorical distribution with a probability vector Ï Î¸(x) as its statistics. The probability vector is of course parameterised by θ. We denote the average policy network as Ï Î¸a and update its parameters θa â | 1611.01224#17 | 1611.01224#19 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#19 | Sample Efficient Actor-Critic with Experience Replay | We decompose our policy network in two parts: a distribution f, and a deep neural network that generates the statistics φ_θ(x) of this distribution. That is, given f, the policy is completely characterized by the network φ_θ: π(·|x) = f(·|φ_θ(x)). For example, in the discrete domain, we choose f to be the categorical distribution with a probability vector φ_θ(x) as its statistics. The probability vector is of course parameterised by θ. We denote the average policy network as φ_θa and update its parameters θ_a "softly" after each update to the policy parameter θ: θ_a ← αθ_a + (1 − α)θ. Consider, for example, the ACER policy gradient as defined in Equation (9), but with respect to φ: ĝ^acer_t = ρ̄_t ∇_{φ_θ(x_t)} log f(a_t|φ_θ(x_t))[Q^ret(x_t, a_t) − V_θv(x_t)] + E_{a∼π}( [(ρ_t(a) − c)/ρ_t(a)]_+ ∇_{φ_θ(x_t)} log f(a|φ_θ(x_t))[Q_θv(x_t, a) − V_θv(x_t)] ).  (10) | 1611.01224#18 | 1611.01224#20 | 1611.01224 | [
"1602.01783"
]
|
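The "soft" update of the average policy network is an exponential moving average of the parameters; a one-line sketch (representing parameters as a list of arrays is an assumption):

```python
def update_average_policy(theta_a, theta, alpha=0.99):
    """theta_a <- alpha * theta_a + (1 - alpha) * theta, parameter by parameter."""
    return [alpha * pa + (1.0 - alpha) * p for pa, p in zip(theta_a, theta)]
```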
1611.01224#20 | Sample Efficient Actor-Critic with Experience Replay | Given the averaged policy network, our proposed trust region update involves two stages. In the first stage, we solve the following optimization problem with a linearized KL divergence constraint: minimize_z (1/2)||ĝ^acer_t − z||^2_2 subject to ∇_{φ_θ(x_t)} D_KL[f(·|φ_θa(x_t)) || f(·|φ_θ(x_t))]^T z ≤ δ.  (11) Since the constraint is linear, the overall optimization problem reduces to a simple quadratic programming problem, the solution of which can be easily derived in closed form using the KKT conditions. Letting k = ∇_{φ_θ(x_t)} D_KL[f(·|φ_θa(x_t)) || f(·|φ_θ(x_t))], the solution is: z* = ĝ^acer_t − max{0, (k^T ĝ^acer_t − δ)/||k||^2_2} k.  (12) | 1611.01224#19 | 1611.01224#21 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#21 | Sample Efficient Actor-Critic with Experience Replay | This transformation of the gradient has a very natural form. If the constraint is satisfied, there is no change to the gradient with respect to φ_θ(x_t). Otherwise, the update is scaled down in the direction of k, thus effectively lowering the rate of change between the activations of the current policy and the average policy network. (Figure 1: ACER improvements in sample (LEFT) and computation (RIGHT) complexity on Atari. On each plot, the median of the human-normalized score across all 57 Atari games is presented for 4 ratios of replay, with 0 replay corresponding to on-policy A3C. The colored solid and dashed lines represent ACER with and without trust region updating respectively. The environment steps are counted over all threads. The gray curve is the original DQN agent (Mnih et al., 2015) and the black curve is one of the Prioritized Double DQN agents from Schaul et al. (2016).) In the second stage, we take advantage of back-propagation. | 1611.01224#20 | 1611.01224#22 | 1611.01224 | [
"1602.01783"
]
|
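The quadratic program of Eq. (11) has the closed-form solution of Eq. (12); below is a small sketch of that adjustment, operating on gradients taken with respect to the distribution statistics φ_θ(x_t). The function interface is an assumption.

```python
# Closed-form trust-region adjustment of Eq. (12).
import numpy as np

def trust_region_adjust(g, k, delta):
    """g: ACER gradient w.r.t. phi_theta(x_t); k: gradient of the KL from the
    average-policy distribution to the current one, also w.r.t. phi_theta(x_t)."""
    scale = max(0.0, (float(k @ g) - delta) / float(k @ k))
    return g - scale * k   # unchanged when the linearized constraint already holds
```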
1611.01224#22 | Sample Efficient Actor-Critic with Experience Replay | Specifically, the updated gradient with respect to φ_θ, that is z*, is back-propagated through the network to compute the derivatives with respect to the parameters. The parameter updates for the policy network follow from the chain rule: (∂φ_θ(x)/∂θ) z*. The trust region step is carried out in the space of the statistics of the distribution f, and not in the space of the policy parameters. This is done deliberately so as to avoid an additional back-propagation step through the policy network. We would like to remark that the algorithm advanced in this section can be thought of as a general strategy for modifying the backward messages in back-propagation so as to stabilize the activations. Instead of a trust region update, one could alternatively add an appropriately scaled KL cost to the objective function as proposed by Heess et al. (2015). This approach, however, is less robust to the choice of hyper-parameters in our experience. The ACER algorithm results from a combination of the above ideas, with the precise pseudo-code appearing in Appendix A. A master algorithm (Algorithm 1) calls ACER on-policy to perform updates and propose trajectories. It then calls the ACER off-policy component to conduct several replay steps. When on-policy, ACER effectively becomes a modified version of A3C where Q instead of V baselines are employed and trust region optimization is used. | 1611.01224#21 | 1611.01224#23 | 1611.01224 | [
"1602.01783"
]
|
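The master scheme described above (one on-policy call followed by several replay calls) can be sketched as below. The Poisson draw of the number of replay steps follows Algorithm 1 in Appendix A, while the acer_update callable is a stand-in, not the paper's implementation.

```python
# Master loop: one on-policy ACER update, then a Poisson-distributed number of
# replay (off-policy) updates so that the average replay ratio equals r.
import numpy as np

def run_acer(acer_update, replay_ratio=4, iterations=1000, seed=0):
    """acer_update(on_policy: bool) is assumed to run one ACER update."""
    rng = np.random.default_rng(seed)
    for _ in range(iterations):
        acer_update(on_policy=True)                   # act, store, update
        for _ in range(rng.poisson(replay_ratio)):
            acer_update(on_policy=False)              # replay from memory
```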
1611.01224#23 | Sample Efficient Actor-Critic with Experience Replay | # 4 RESULTS ON ATARI We use the Arcade Learning Environment of Bellemare et al. (2013) to conduct an extensive evaluation. We deploy one single algorithm and network architecture, with ï¬ xed hyper-parameters, to learn to play 57 Atari games given only raw pixel observations and game rewards. This task is highly demanding because of the diversity of games, and high-dimensional pixel-level observations. Our experimental setup uses 16 actor-learner threads running on a single machine with no GPUs. We adopt the same input pre-processing and network architecture as Mnih et al. (2015). | 1611.01224#22 | 1611.01224#24 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#24 | Sample Efficient Actor-Critic with Experience Replay | Specifically, the network consists of a convolutional layer with 32 8×8 filters with stride 4, followed by another convolutional layer with 64 4×4 filters with stride 2, followed by a final convolutional layer with 64 3×3 filters with stride 1, followed by a fully-connected layer of size 512. Each of the hidden layers is followed by a rectifier nonlinearity. The network outputs a softmax policy and Q values. | 1611.01224#23 | 1611.01224#25 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#25 | Sample Efficient Actor-Critic with Experience Replay | 6 Published as a conference paper at ICLR 2017 When using replay, we add to each thread a replay memory that is up to 50 000 frames in size. The total amount of memory used across all threads is thus similar in size to that of DQN (Mnih et al., 2015). For all Atari experiments, we use a single learning rate adopted from an earlier implementation of A3C without further tuning. We do not anneal the learning rates over the course of training as in Mnih et al. (2016). We otherwise adopt the same optimization procedure as in Mnih et al. (2016). | 1611.01224#24 | 1611.01224#26 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#26 | Sample Efficient Actor-Critic with Experience Replay | Speciï¬ cally, we adopt entropy regularization with weight 0.001, discount the rewards with γ = 0.99, and perform updates every 20 steps (k = 20 in the notation of Section 2). In all our experiments with experience replay, we use importance weight truncation with c = 10. We consider training ACER both with and without trust region updating as described in Section 3.3. When trust region updating is used, we use δ = 1 and α = 0.99 for all experiments. To compare different agents, we adopt as our metric the median of the human normalized score over all 57 games. The normalization is calculated such that, for each game, human scores and random scores are evaluated to 1, and 0 respectively. The normalized score for a given game at time t is computed as the average normalized score over the past 1 million consecutive frames encountered until time t. For each agent, we plot its cumulative maximum median score over time. The result is summarized in Figure 1. The four colors in Figure 1 correspond to four replay ratios (0, 1, 4 and 8) with a ratio of 4 meaning that we use the off-policy component of ACER 4 times after using the on-policy component (A3C). That is, a replay ratio of 0 means that we are using A3C. The solid and dashed lines represent ACER with and without trust region updating respectively. The gray and black curves are the original DQN (Mnih et al., 2015) and Prioritized Replay agent of Schaul et al. (2016) agents respectively. As shown on the left panel of Figure 1, replay signiï¬ cantly increases data efï¬ ciency. | 1611.01224#25 | 1611.01224#27 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#27 | Sample Efficient Actor-Critic with Experience Replay | We observe that when using the trust region optimizer, the average reward as a function of the number of environmental steps increases with the ratio of replay. This increase has diminishing returns, but with enough replay, ACER can match the performance of the best DQN agents. Moreover, it is clear that the off-policy actor critics (ACER) are much more sample efï¬ cient than their on-policy counterpart (A3C). The right panel of Figure 1 shows that ACER agents perform similarly to A3C when measured by wall clock time. Thus, in this case, it is possible to achieve better data-efï¬ ciency without necessarily compromising on computation time. In particular, ACER with a replay ratio of 4 is an appealing alternative to either the prioritized DQN agent or A3C. # 5 CONTINUOUS ACTOR CRITIC WITH EXPERIENCE REPLAY Retrace requires estimates of both Q and V , but we cannot easily integrate over Q to derive V in continuous action spaces. In this section, we propose a solution to this problem in the form of a novel representation for RL, as well as modiï¬ | 1611.01224#26 | 1611.01224#28 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#28 | Sample Efficient Actor-Critic with Experience Replay | cations necessary for trust region updating. 5.1 POLICY EVALUATION Retrace provides a target for learning Q_θv, but not for learning V_θv. We could use importance sampling to compute V_θv given Q_θv, but this estimator has high variance. We propose a new architecture which we call Stochastic Dueling Networks (SDNs), inspired by the Dueling networks of Wang et al. (2016), which is designed to estimate both V^π and Q^π off-policy while maintaining consistency between the two estimates. At each time step, an SDN outputs a stochastic estimate Q̃_θv of Q^π and a deterministic estimate V_θv of V^π, such that Q̃_θv(x_t, a_t) ∼ V_θv(x_t) + A_θv(x_t, a_t) − (1/n) Σ_{i=1}^{n} A_θv(x_t, u_i), with u_i ∼ π_θ(·|x_t),  (13) where n is a parameter, see Figure 2. The two estimates are consistent in the sense that E_{a∼π(·|x_t)}[ E_{u_{1:n}∼π(·|x_t)}( Q̃_θv(x_t, a) ) ] = V_θv(x_t). Furthermore, we can learn about V^π by learning Q̃_θv. To see this, assume we have learned Q^π perfectly such that E_{u_{1:n}∼π(·|x_t)}( Q̃_θv(x_t, a) ) = Q^π(x_t, a); then V_θv(x_t) = E_{a∼π(·|x_t)}[ E_{u_{1:n}∼π(·|x_t)}( Q̃_θv(x_t, a) ) ] = E_{a∼π(·|x_t)}[ Q^π(x_t, a) ] = V^π(x_t). Therefore, a target on Q̃_θv(x_t, a_t) also provides an error signal for updating V_θv. | 1611.01224#27 | 1611.01224#29 | 1611.01224 | [
"1602.01783"
]
|
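A sketch of the SDN estimate of Eq. (13). The callable interface is an assumption; in practice the value, advantage and policy heads belong to the same network.

```python
# Stochastic Dueling Network estimate, Eq. (13): the stochastic Q adds the
# advantage of the queried action and subtracts the mean advantage over n
# actions sampled from the current policy.
import numpy as np

def sdn_q_estimate(value, advantage, action, sample_action, n=5):
    """value = V_thetav(x); advantage(a) = A_thetav(x, a);
    sample_action() draws u ~ pi_theta(.|x)."""
    baseline = np.mean([advantage(sample_action()) for _ in range(n)])
    return value + advantage(action) - baseline
```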
1611.01224#29 | Sample Efficient Actor-Critic with Experience Replay | Figure 2: A schematic of the Stochastic Dueling Network. In the drawing, [u_1, ..., u_n] are assumed to be samples from π_θ(·|x_t). This schematic illustrates the concept of SDNs but does not reflect the real sizes of the networks used. In addition to SDNs, however, we also construct the following novel target for estimating V^π: V^target(x_t) = min{1, π(a_t|x_t)/μ(a_t|x_t)} (Q^ret(x_t, a_t) − Q_θv(x_t, a_t)) + V_θv(x_t).  (14) The above target is also derived via the truncation and bias correction trick; for more details, see Appendix D. Finally, when estimating Q^ret in continuous domains, we implement a slightly different formulation of the truncated importance weights, ρ̄_t = min{1, (π(a_t|x_t)/μ(a_t|x_t))^{1/d}}, where d is the dimensionality of the action space. Although not essential, we have found this formulation to lead to faster learning. | 1611.01224#28 | 1611.01224#30 | 1611.01224 | [
"1602.01783"
]
|
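The value target of Eq. (14) and the per-dimension truncation for continuous actions amount to two one-liners; the function signatures below are assumptions made for illustration.

```python
# Eq. (14) value target and the d-th-root truncation for continuous actions.
def v_target(rho, q_ret, q_estimate, v, c=1.0):
    """rho = pi(a_t|x_t) / mu(a_t|x_t); q_estimate = Q_thetav(x_t, a_t); v = V_thetav(x_t)."""
    return min(c, rho) * (q_ret - q_estimate) + v

def truncated_weight_continuous(rho, action_dim):
    """rho_bar = min(1, rho**(1/d)), used when estimating Q_ret in continuous domains."""
    return min(1.0, rho ** (1.0 / action_dim))
```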
1611.01224#30 | Sample Efficient Actor-Critic with Experience Replay | 5.2 TRUST REGION UPDATING To adopt the trust region updating scheme (Section 3.3) in the continuous control domain, one simply has to choose a distribution f and a gradient specification ĝ^acer_t suitable for continuous action spaces. For the distribution f, we choose Gaussian distributions with fixed diagonal covariance and mean φ_θ(x). To derive ĝ^acer_t, we consider the ACER policy gradient of Equation (10), now with the stochastic dueling network, but with respect to φ: g^acer_t = E_{x_t}[ E_{a_t}[ ρ̄_t ∇_{φ_θ(x_t)} log f(a_t|φ_θ(x_t)) (Q^opc(x_t, a_t) − V_θv(x_t)) ] + E_{a∼π}( [(ρ_t(a) − c)/ρ_t(a)]_+ (Q̃_θv(x_t, a) − V_θv(x_t)) ∇_{φ_θ(x_t)} log f(a|φ_θ(x_t)) ) ].  (15) In the above definition, we are using Q^opc instead of Q^ret. Here, Q^opc(x_t, a_t) is the same as Retrace with the exception that the truncated importance ratio is replaced with 1 (Harutyunyan et al., 2016). Please refer to Appendix B for an expanded discussion on this design choice. Given an observation x_t, we can sample a'_t ∼ π_θ(·|x_t) to obtain the following Monte Carlo approximation: ĝ^acer_t = ρ̄_t ∇_{φ_θ(x_t)} log f(a_t|φ_θ(x_t)) (Q^opc(x_t, a_t) − V_θv(x_t)) + [(ρ_t(a'_t) − c)/ρ_t(a'_t)]_+ (Q̃_θv(x_t, a'_t) − V_θv(x_t)) ∇_{φ_θ(x_t)} log f(a'_t|φ_θ(x_t)).  (16) | 1611.01224#29 | 1611.01224#31 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#31 | Sample Efficient Actor-Critic with Experience Replay | Given f and ĝ^acer_t, we apply the same steps as detailed in Section 3.3 to complete the update. The precise pseudo-code of the ACER algorithm for continuous action spaces is presented in Appendix A. (Figure 3: [TOP] Screen shots of the continuous control tasks: Walker2d (9-DoF/6-dim. actions), Fish (13-DoF/5-dim. actions), Cartpole (2-DoF/1-dim. actions), Humanoid (27-DoF/21-dim. actions), Reacher3 (3-DoF/3-dim. actions) and Cheetah (9-DoF/6-dim. actions). [BOTTOM] Performance of different methods on these tasks, shown as episode rewards against millions of environment steps. ACER outperforms all other methods and shows clear gains for the higher-dimensionality tasks (humanoid, cheetah, walker and fish). The proposed trust region method by itself improves the two baselines (truncated importance sampling and A3C) significantly.) | 1611.01224#30 | 1611.01224#32 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#32 | Sample Efficient Actor-Critic with Experience Replay | sh). The proposed trust region method by itself improves the two baselines (truncated importance sampling and A3C) signiï¬ cantly. # 6 RESULTS ON MUJOCO We evaluate our algorithms on 6 continuous control tasks, all of which are simulated using the MuJoCo physics engine (Todorov et al., 2012). For descriptions of the tasks, please refer to Appendix E.1. Brieï¬ y, the tasks with action dimensionality in brackets are: cartpole (1D), reacher (3D), cheetah (6D), ï¬ sh (5D), walker (6D) and humanoid (21D). These tasks are illustrated in Figure 3. | 1611.01224#31 | 1611.01224#33 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#33 | Sample Efficient Actor-Critic with Experience Replay | To benchmark ACER for continuous control, we compare it to its on-policy counterpart both with and without trust region updating. We refer to these two baselines as A3C and Trust-A3C. Additionally, we also compare to a baseline with replay where we truncate the importance weights over trajectories as in (Wawrzy´nski, 2009). For a detailed description of this baseline, please refer to Appendix E. Again, we run this baseline both with and without trust region updating, and refer to these choices as Trust-TIS and TIS respectively. Last but not least, we refer to our proposed approach with SDN and trust region updating as simply ACER. | 1611.01224#32 | 1611.01224#34 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#34 | Sample Efficient Actor-Critic with Experience Replay | All ï¬ ve setups are implemented in the asynchronous A3C framework. All the aforementioned setups share the same network architecture that computes the policy and state values. We maintain an additional small network that computes the stochastic A values in the case of ACER. We use n = 5 (using the notation in Equation (13)) in all SDNs. Instead of mixing on-policy and replay learning as done in the Atari domain, ACER for continuous actions is entirely off-policy, with experiences generated from the simulator (4 times on average). When using replay, we add to each thread a replay memory that is 5, 000 frames in size and perform updates every 50 steps (k = 50 in the notation of Section 2). The rate of the soft updating (α as in Section 3.3) is set to 0.995 in all setups involving trust region updating. The truncation threshold c is set to 5 for ACER. | 1611.01224#33 | 1611.01224#35 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#35 | Sample Efficient Actor-Critic with Experience Replay | 9 Published as a conference paper at ICLR 2017 We use diagonal Gaussian policies with ï¬ xed diagonal covariances where the diagonal standard deviation is set to 0.3. For all setups, we sample the learning rates log-uniformly in the range [10â 4, 10â 3.3]. For setups involving trust region updating, we also sample δ uniformly in the range [0.1, 2]. With all setups, we use 30 sampled hyper-parameter settings. The empirical results for all continuous control tasks are shown Figure 3, where we show the mean and standard deviation of the best 5 out of 30 hyper-parameter settings over which we searched 3. For sensitivity analyses with respect to the hyper-parameters, please refer to Figures 5 and 6 in the Appendix. In continuous control, ACER outperforms the A3C and truncated importance sampling baselines by a very signiï¬ | 1611.01224#34 | 1611.01224#36 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#36 | Sample Efficient Actor-Critic with Experience Replay | cant margin. Here, we also ï¬ nd that the proposed trust region optimization method can result in huge improvements over the baselines. The high-dimensional continuous action policies are much harder to optimize than the small discrete action policies in Atari, and hence we observe much higher gains for trust region optimization in the continuous control domains. In spite of the improvements brought in by trust region optimization, ACER still outperforms all other methods, specially in higher dimensions. # 6.1 ABLATIONS To further tease apart the contributions of the different components of ACER, we conduct an ablation analysis where we individually remove Retrace / Q(λ) off-policy correction, SDNs, trust region, and truncation with bias correction from the algorithm. As shown in Figure 4, Retrace and off- policy correction, SDNs, and trust region are critical: removing any one of them leads to a clear deterioration of the performance. Truncation with bias correction did not alter the results in the Fish and Walker2d tasks. However, in Humanoid, where the dimensionality of the action space is much higher, including truncation and bias correction brings a signiï¬ cant boost which makes the originally kneeling humanoid stand. Presumably, the high dimensionality of the action space increases the variance of the importance weights which makes truncation with bias correction important. | 1611.01224#35 | 1611.01224#37 | 1611.01224 | [
"1602.01783"
]
|
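The continuous-control settings reported in the two chunks above can be collected into one illustrative configuration object; the dictionary itself is not from the paper, only the values are.

```python
# Continuous-control ACER settings as reported in the text (illustrative grouping).
CONTINUOUS_ACER_CONFIG = {
    "sdn_samples_n": 5,                  # n in Eq. (13)
    "replay_memory_frames": 5000,        # per thread
    "update_every_k_steps": 50,          # k in the notation of Section 2
    "avg_policy_alpha": 0.995,           # soft update rate for the average policy
    "truncation_c": 5.0,                 # importance weight truncation threshold
    "policy_std": 0.3,                   # fixed diagonal Gaussian standard deviation
    "learning_rate_range": (1e-4, 10 ** -3.3),   # sampled log-uniformly
    "trust_region_delta_range": (0.1, 2.0),      # delta sampled uniformly
    "hyperparameter_samples": 30,
}
```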
1611.01224#37 | Sample Efficient Actor-Critic with Experience Replay | For more details on the experimental setup please see Appendix E.4. # 7 THEORETICAL ANALYSIS Retrace is a very recent development in reinforcement learning. In fact, this work is the ï¬ rst to consider Retrace in the policy gradients setting. For this reason, and given the core role that Retrace plays in ACER, it is valuable to shed more light on this technique. In this section, we will prove that Retrace can be interpreted as an application of the importance weight truncation and bias correction trick advanced in this paper. | 1611.01224#36 | 1611.01224#38 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#38 | Sample Efficient Actor-Critic with Experience Replay | Consider the following equation: QÏ (xt, at) = Ext+1at+1 [rt + Î³Ï t+1QÏ (xt+1, at+1)] . (17) If we apply the weight truncation and bias correction trick to the above equation we obtain Q" (xt, a) = Be siares ret 7er4+1Q" (41,4141) +7 E [aa â â | Q* (x141,4) } | - ann pr41(a) + By recursively expanding QÏ as in Equation (18), we can represent QÏ (x, a) as: pusi(b) =e â ¢(¢,a) =E yt pi\(retyE [| * (rp415b . 19 Q* (2,4) = Ey X17 (1) (: â , ( perild) 2 (2141, 0) (19) â ¢(¢,a) =E Q* (2,4) = Ey X17 The expectation Eµ is taken over trajectories starting from x with actions generated with respect to µ. When QÏ is not available, we can replace it with our current estimate Q to get a return-based | 1611.01224#37 | 1611.01224#39 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#39 | Sample Efficient Actor-Critic with Experience Replay | 3 For videos of the policies learned with ACER, please see: https://www.youtube.com/watch?v= NmbeQYoVv5g&list=PLkmHIkhlFjiTlvwxEnsJMs3v7seR5HSP-. 10 (18) Published as a conference paper at ICLR 2017 Fish Walker2d Humanoid n o i g e R t s u r T o N s N D S o N r o n e c a r t e R o N . r r o C y c i l o P - f f O n o i t a c n u r T o N . r r o C s a i B & Figure 4: | 1611.01224#38 | 1611.01224#40 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#40 | Sample Efficient Actor-Critic with Experience Replay | If we apply the weight truncation and bias correction trick to the above equation we obtain Q^π(x_t, a_t) = E_{x_{t+1} a_{t+1}}[ r_t + γ ρ̄_{t+1} Q^π(x_{t+1}, a_{t+1}) + γ E_{a∼π}( [(ρ_{t+1}(a) − c)/ρ_{t+1}(a)]_+ Q^π(x_{t+1}, a) ) ].  (18) By recursively expanding Q^π as in Equation (18), we can represent Q^π(x, a) as: Q^π(x, a) = E_μ[ Σ_{t≥0} γ^t (∏_{i=1}^{t} ρ̄_i) ( r_t + γ E_{b∼π}[ [(ρ_{t+1}(b) − c)/ρ_{t+1}(b)]_+ Q^π(x_{t+1}, b) ] ) ].  (19) The expectation E_μ is taken over trajectories starting from x with actions generated with respect to μ. When Q^π is not available, we can replace it with our current estimate Q to get a return-based | 1611.01224#39 | 1611.01224#41 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#41 | Sample Efficient Actor-Critic with Experience Replay | estimate of Q^π. This operation also defines an operator: BQ(x, a) = E_μ[ Σ_{t≥0} γ^t (∏_{i=1}^{t} ρ̄_i) ( r_t + γ E_{b∼π}[ [(ρ_{t+1}(b) − c)/ρ_{t+1}(b)]_+ Q(x_{t+1}, b) ] ) ].  (20) In the following proposition, we show that B is a contraction operator with a unique fixed point Q^π and that it is equivalent to the Retrace operator. Proposition 1. The operator B is a contraction operator such that ||BQ − Q^π||_∞ ≤ γ||Q − Q^π||_∞, and B is equivalent to Retrace. The above proposition not only shows an alternative way of arriving at the same operator, but also provides a different proof of contraction for Retrace. Please refer to Appendix C for the regularization conditions and proof of the above proposition. | 1611.01224#40 | 1611.01224#42 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#42 | Sample Efficient Actor-Critic with Experience Replay | Finally, B, and therefore Retrace, generalizes both the Bellman operator T^π and importance sampling. Specifically, when c = 0, B = T^π, and when c = ∞, B recovers importance sampling; see Appendix C. # 8 CONCLUDING REMARKS We have introduced a stable off-policy actor critic that scales to both continuous and discrete action spaces. This approach integrates several recent advances in RL in a principled manner. In addition, it integrates three innovations advanced in this paper: truncated importance sampling with bias correction, stochastic dueling networks and an efficient trust region policy optimization method. We showed that the method not only matches the performance of the best known methods on Atari, but that it also outperforms popular techniques on several continuous control problems. | 1611.01224#41 | 1611.01224#43 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#43 | Sample Efficient Actor-Critic with Experience Replay | The efï¬ cient trust region optimization method advanced in this paper performs remarkably well in continuous domains. It could prove very useful in other deep learning domains, where it is hard to stabilize the training process. # ACKNOWLEDGMENTS We are very thankful to Marc Bellemare, Jascha Sohl-Dickstein, and S´ebastien Racaniere for proof- reading and valuable suggestions. # REFERENCES M. G. Bellemare, Y. Naddaf, J. Veness, and M. | 1611.01224#42 | 1611.01224#44 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#44 | Sample Efficient Actor-Critic with Experience Replay | Bowling. The arcade learning environment: An evaluation platform for general agents. JAIR, 47:253â 279, 2013. G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint 1606.01540, 2016. T. Degris, M. White, and R. S. Sutton. | 1611.01224#43 | 1611.01224#45 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#45 | Sample Efficient Actor-Critic with Experience Replay | Off-policy actor-critic. In ICML, pp. 457â 464, 2012. Anna Harutyunyan, Marc G Bellemare, Tom Stepleton, and Remi Munos. Q (λ) with off-policy corrections. arXiv preprint arXiv:1602.04951, 2016. N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. | 1611.01224#44 | 1611.01224#46 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#46 | Sample Efficient Actor-Critic with Experience Replay | Tassa. Learning continuous control policies by stochastic value gradients. In NIPS, 2015. T. Jie and P. Abbeel. On a connection between importance sampling and the likelihood ratio policy gradient. In NIPS, pp. 1000â 1008, 2010. S. Levine and V. Koltun. Guided policy search. In ICML, 2013. S. Levine, C. Finn, T. Darrell, and P. Abbeel. | 1611.01224#45 | 1611.01224#47 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#47 | Sample Efficient Actor-Critic with Experience Replay | End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015. T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015. L.J. Lin. | 1611.01224#46 | 1611.01224#48 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#48 | Sample Efficient Actor-Critic with Experience Replay | Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3):293â 321, 1992. N. Meuleau, L. Peshkin, L. P. Kaelbling, and K. Kim. Off-policy policy search. Technical report, MIT AI Lab, 2000. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. | 1611.01224#47 | 1611.01224#49 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#49 | Sample Efficient Actor-Critic with Experience Replay | Nature, 518(7540): 529â 533, 2015. V. Mnih, A. Puigdom`enech Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv:1602.01783, 2016. R. Munos, T. Stepleton, A. Harutyunyan, and M. G. | 1611.01224#48 | 1611.01224#50 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#50 | Sample Efficient Actor-Critic with Experience Replay | Bellemare. Safe and efï¬ cient off-policy reinforcement learning. arXiv preprint arXiv:1606.02647, 2016. K. Narasimhan, T. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. In EMNLP, 2015. 12 Published as a conference paper at ICLR 2017 J. Oh, V. Chockalingam, S. P. Singh, and H. Lee. | 1611.01224#49 | 1611.01224#51 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#51 | Sample Efficient Actor-Critic with Experience Replay | Control of memory, active perception, and action in Minecraft. In ICML, 2016. D. Precup, R. S. Sutton, and S. Singh. Eligibility traces for off-policy policy evaluation. In ICML, pp. 759â 766, 2000. T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In ICLR, 2016. J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. | 1611.01224#50 | 1611.01224#52 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#52 | Sample Efficient Actor-Critic with Experience Replay | Trust region policy optimization. In ICML, 2015a. J. Schulman, P. Moritz, S. Levine, M. I. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015b. D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. | 1611.01224#51 | 1611.01224#53 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#53 | Sample Efficient Actor-Critic with Experience Replay | Deterministic policy gradient algorithms. In ICML, 2014. D. Silver, A. Huang, C.J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. | 1611.01224#52 | 1611.01224#54 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#54 | Sample Efficient Actor-Critic with Experience Replay | Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484â 489, 2016. R. S. Sutton, D. Mcallester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pp. 1057â 1063, 2000. E. Todorov, T. Erez, and Y. Tassa. MuJoCo: | 1611.01224#53 | 1611.01224#55 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#55 | Sample Efficient Actor-Critic with Experience Replay | A physics engine for model-based control. In International Conference on Intelligent Robots and Systems, pp. 5026â 5033, 2012. Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling network architectures for deep reinforcement learning. In ICML, 2016. P. Wawrzy´nski. Real-time reinforcement learning by sequential actorâ critics and experience replay. Neural Networks, 22(10):1484â 1497, 2009. 13 | 1611.01224#54 | 1611.01224#56 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#56 | Sample Efficient Actor-Critic with Experience Replay | # A ACER PSEUDO-CODE FOR DISCRETE ACTIONS # Algorithm 1 ACER for discrete actions (master algorithm) // Assume global shared parameter vectors θ and θ_v. // Assume ratio of replay r. repeat: Call ACER on-policy, Algorithm 2; n ← Poisson(r); for i ∈ {1, ..., n} do Call ACER off-policy, Algorithm 2, end for; until Max iteration or time reached. # Algorithm 2 ACER for discrete actions Reset gradients dθ ← 0 and dθ_v ← 0. Initialize parameters θ' ← θ and θ'_v ← θ_v. if not On-Policy then Sample the trajectory {x_0, a_0, r_0, μ(·|x_0), ..., x_k, a_k, r_k, μ(·|x_k)} from the replay memory, else Get state x_0, end if. for i ∈ {0, ..., k} do: Compute f(·|φ_θ'(x_i)), Q_θ'v(x_i, ·) and f(·|φ_θa(x_i)); if On-Policy then Perform a_i according to f(·|φ_θ'(x_i)), Receive reward r_i and new state x_{i+1}, μ(·|x_i) ← f(·|φ_θ'(x_i)), end if; ρ̄_i ← min{1, f(a_i|φ_θ'(x_i))/μ(a_i|x_i)}; end for. Q^ret ← 0 for terminal x_k, otherwise Σ_a Q_θ'v(x_k, a) f(a|φ_θ'(x_k)). for i ∈ {k−1, ..., 0} do: Q^ret ← r_i + γQ^ret; V_i ← Σ_a Q_θ'v(x_i, a) f(a|φ_θ'(x_i)); Computing quantities needed for trust region updating: g ← min{c, ρ_i(a_i)} ∇_{φ_θ'(x_i)} log f(a_i|φ_θ'(x_i)) (Q^ret − V_i) + Σ_a [(ρ_i(a) − c)/ρ_i(a)]_+ f(a|φ_θ'(x_i)) ∇_{φ_θ'(x_i)} log f(a|φ_θ'(x_i)) (Q_θ'v(x_i, a) − V_i), k ← ∇_{φ_θ'(x_i)} D_KL[f(·|φ_θa(x_i)) || f(·|φ_θ'(x_i))]; Accumulate gradients wrt θ': dθ' ← dθ' | 1611.01224#55 | 1611.01224#57 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#57 | Sample Efficient Actor-Critic with Experience Replay | + (∂φ_θ'(x_i)/∂θ') (g − max{0, (k^T g − δ)/||k||^2_2} k); Accumulate gradients wrt θ'_v: dθ_v ← dθ_v + ∇_θ'v (Q^ret − Q_θ'v(x_i, a_i))^2; Update Retrace target: Q^ret ← ρ̄_i (Q^ret − Q_θ'v(x_i, a_i)) + V_i; end for. Perform asynchronous update of θ using dθ and of θ_v using dθ_v. Updating the average policy network: θ_a ← αθ_a + (1 − α)θ. # B Q(λ) WITH OFF-POLICY CORRECTIONS Given a trajectory generated under the behavior policy μ, the Q(λ) with off-policy corrections estimator (Harutyunyan et al., 2016) can be expressed recursively as follows: Q^opc(x_t, a_t) = r_t + γ[Q^opc(x_{t+1}, a_{t+1}) − Q(x_{t+1}, a_{t+1})] + γV(x_{t+1}).  (21) Notice that Q^opc(x_t, a_t) is the same as Retrace with the exception that the truncated importance ratio is replaced with 1. | 1611.01224#56 | 1611.01224#58 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#58 | Sample Efficient Actor-Critic with Experience Replay | # Algorithm 3 ACER for Continuous Actions Reset gradients dθ ← 0 and dθ_v ← 0. Initialize parameters θ' ← θ and θ'_v ← θ_v. Sample the trajectory {x_0, a_0, r_0, μ(·|x_0), ..., x_k, a_k, r_k, μ(·|x_k)} from the replay memory. for i ∈ {0, ..., k} do: Compute f(·|φ_θ'(x_i)), V_θ'v(x_i), Q̃_θ'v(x_i, a_i), and f(·|φ_θa(x_i)); Sample a'_i ∼ f(·|φ_θ'(x_i)); ρ_i ← f(a_i|φ_θ'(x_i))/μ(a_i|x_i) and ρ'_i ← f(a'_i|φ_θ'(x_i))/μ(a'_i|x_i); c_i ← min{1, (ρ_i)^{1/d}}; end for. Q^ret ← 0 for terminal x_k, otherwise V_θ'v(x_k); Q^opc ← Q^ret. for i ∈ {k−1, ..., 0} do: Q^ret ← r_i + γQ^ret; Q^opc ← r_i + γQ^opc; Computing quantities needed for trust region updating: g ← min{c, ρ_i} ∇_{φ_θ'(x_i)} log f(a_i|φ_θ'(x_i)) (Q^opc(x_i, a_i) − V_θ'v(x_i)) + [(ρ'_i − c)/ρ'_i]_+ (Q̃_θ'v(x_i, a'_i) − V_θ'v(x_i)) ∇_{φ_θ'(x_i)} log f(a'_i|φ_θ'(x_i)), k ← ∇_{φ_θ'(x_i)} D_KL[f(·|φ_θa(x_i)) || f(·|φ_θ'(x_i))]; Accumulate gradients wrt θ: dθ ← dθ + (∂φ_θ'(x_i)/∂θ') (g − max{0, (k^T g − δ)/||k||^2_2} k); Accumulate gradients wrt θ_v: dθ_v ← dθ_v + (Q^ret − Q̃_θ'v(x_i, a_i)) ∇_θ'v Q̃_θ'v(x_i, a_i); dθ_v ← dθ_v + min{1, ρ_i} (Q^ret(x_i, a_i) − Q̃_θ'v(x_i, a_i)) ∇_θ'v V_θ'v(x_i); Update Retrace target: Q^ret ← c_i (Q^ret − Q̃_θ'v(x_i, a_i)) + V_θ'v(x_i); Update Q^opc: Q^opc ← (Q^opc − Q̃_θ'v(x_i, a_i)) + V_θ'v(x_i); end for. Perform asynchronous update of θ using dθ and of θ_v using dθ_v. Updating the average policy network: θ_a ← αθ_a + (1 − α)θ. | 1611.01224#57 | 1611.01224#59 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#59 | Sample Efficient Actor-Critic with Experience Replay | Because of the lack of the truncated importance ratio, the operator defined by Q^opc is only a contraction if the target and behavior policies are close to each other (Harutyunyan et al., 2016). Q(λ) with off-policy corrections is therefore less stable compared to Retrace and unsafe for policy evaluation. Q^opc, however, could better utilize the returns as the traces are not cut by the truncated importance weights. As a result, Q^opc could be used efficiently to estimate Q^π in the policy gradient (e.g. in Equation (16)). In our continuous control experiments, we have found that Q^opc leads to faster learning. # C RETRACE AS TRUNCATED IMPORTANCE SAMPLING WITH BIAS CORRECTION For the purpose of proving Proposition 1, we assume our environment to be a Markov Decision Process and, for notational simplicity, we also restrict it to a finite state space, with given state transition probabilities and reward function r. | 1611.01224#58 | 1611.01224#60 | 1611.01224 | [
"1602.01783"
]
|
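The contrast drawn above between Retrace and Q^opc is a one-line change in the backward recursion: the truncated ratio is replaced with 1. A hedged sketch (the interface is assumed, mirroring the Retrace computation shown earlier):

```python
# Backward targets with a toggle between Retrace and Q_opc (Eq. (21)).
def backward_targets(rewards, q_taken, v, rho, gamma=0.99, bootstrap_v=0.0, opc=False):
    k = len(rewards)
    targets = [0.0] * k
    tail = bootstrap_v
    for t in reversed(range(k)):
        tail = rewards[t] + gamma * tail
        targets[t] = tail
        trace = 1.0 if opc else min(1.0, rho[t])   # Q_opc keeps the full trace
        tail = trace * (tail - q_taken[t]) + v[t]
    return targets
```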
1611.01224#60 | Sample Efficient Actor-Critic with Experience Replay | Proof of Proposition 1. First we show that B is a contraction operator. Subtracting (19) from (20), |BQ(x, a) − Q^π(x, a)| = | E_μ[ Σ_{t≥0} γ^{t+1} (∏_{i=1}^{t} ρ̄_i) E_{b∼π}[ [(ρ_{t+1}(b) − c)/ρ_{t+1}(b)]_+ (Q(x_{t+1}, b) − Q^π(x_{t+1}, b)) ] ] | ≤ E_μ[ Σ_{t≥0} γ^{t+1} (∏_{i=1}^{t} ρ̄_i) P_{t+1} ] sup_{x,b} |Q(x, b) − Q^π(x, b)|,  (22) where P_{t+1} = 1 − E_{b∼μ}[ρ̄_{t+1}(b)]. The last inequality in the above equation is due to Hölder's inequality. Consequently, |BQ(x, a) − Q^π(x, a)| ≤ sup_{x,b} |Q(x, b) − Q^π(x, b)| (γC − (C − 1)), | 1611.01224#59 | 1611.01224#61 | 1611.01224 | [
"1602.01783"
]
|
1611.01224#61 | Sample Efficient Actor-Critic with Experience Replay | where C = E_μ[ Σ_{t≥0} γ^t (∏_{i=1}^{t} ρ̄_i) ]. Since C ≥ Σ_{t=0}^{0} γ^t (∏_{i=1}^{t} ρ̄_i) = 1, we have that γC − (C − 1) ≤ γ. Therefore, we have shown that B is a contraction operator. Now we show that B is the same as Retrace. By applying the truncation and bias correction trick, we have E_{b∼π}[Q(x_{t+1}, b)] = E_{b∼ | 1611.01224#60 | 1611.01224#62 | 1611.01224 | [
"1602.01783"
]
|