id (string, 12–15 chars) | title (string, 8–162 chars) | content (string, 1–17.6k chars) | prechunk_id (string, 0–15 chars) | postchunk_id (string, 0–15 chars) | arxiv_id (string, 10 chars) | references (list, length 1)
---|---|---|---|---|---|---
1611.02205#21 | Playing SNES in the Retro Learning Environment | t affect a human's ability to play the game, and is therefore suitable for RL algorithms as well. To handle the large action space, we limited the algorithm's actions to the minimal button combinations that provide unique behavior. For example, in many games the R and L buttons have no use, so they and their combinations were omitted. 4.1.1 RESULTS A thorough comparison of the four different agents' performance on SNES games can be seen in Figure 2; the full results can be found in Table 3. Only in the game Mortal Kombat was a trained agent able to surpass an expert human player's performance, as opposed to Atari games, where the same algorithms have surpassed a human player on the vast majority of games. One example is Wolfenstein, a 3D first-person shooter that requires solving 3D vision tasks, navigating a maze, and detecting objects. As evident from Figure 2, all agents produce poor results on it, indicating a lack of the required capabilities. Using an epsilon-greedy approach, the agents were not able to explore enough states (or even other rooms, in our case), and the algorithm's final policy appeared as a random walk in 3D space. Exploration based on visited states, such as presented in Bellemare et al. (2016), might help address this issue. An interesting case is Gradius III, a side-scrolling flight-shooter game. While the trained agent was able to master the technical aspects of the game, which include shooting incoming enemies and dodging their projectiles, its final score is still far from a human' | 1611.02205#20 | 1611.02205#22 | 1611.02205 | [
"1609.05143"
]
|
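A brief editorial illustration of the action-space reduction described in the chunk above: the sketch contrasts the full set of SNES button combinations with a hand-picked minimal whitelist. The button names and the contents of MINIMAL_COMBOS are illustrative assumptions, not RLE's actual configuration.

```python
from itertools import combinations

# Hypothetical SNES button set; the real RLE interface may expose it differently.
BUTTONS = ["B", "Y", "SELECT", "START", "UP", "DOWN", "LEFT", "RIGHT", "A", "X", "L", "R"]

def all_combos(max_size=3):
    """Every button combination up to a given size -- far too many to explore directly."""
    combos = [()]
    for k in range(1, max_size + 1):
        combos.extend(combinations(BUTTONS, k))
    return combos

# Game-specific whitelist of combinations that produce unique behavior
# (e.g. L and R are dropped because many games ignore them). Purely illustrative.
MINIMAL_COMBOS = [
    (), ("LEFT",), ("RIGHT",), ("UP",), ("DOWN",),
    ("A",), ("B",), ("RIGHT", "A"), ("RIGHT", "B"), ("LEFT", "A"),
]

print(len(all_combos()), "raw combinations vs.", len(MINIMAL_COMBOS), "minimal actions")
```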
1611.02205#22 | Playing SNES in the Retro Learning Environment | s. This is due to a hidden game mechanism in the form of "power-ups", which can be accumulated and significantly increase the player's abilities. The more power-ups collected without being used, the larger their final impact will be. While this game mechanism is evident to a human, the agent acts myopically and uses each power-up straight away. 4.2 REWARD SHAPING As part of the environment and algorithm evaluation process, we investigated two case studies. The first is a game on which DQN had failed to achieve a better-than-random score, and the second is a game on which the training duration was significantly longer than that of other games. In the first case study, we used the 2D back-view racing game "F-Zero" | 1611.02205#21 | 1611.02205#23 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#23 | Playing SNES in the Retro Learning Environment | . In this game, one is required to complete four laps of the track while avoiding other race cars. The reward, as defined by the score of the game, is received only upon completing a lap. This is an extreme case of reward delay: a lap may last as long as 30 seconds, spanning over 450 states (actions) before a reward is received. Since DQN's exploration is a simple epsilon-greedy approach, it was not able to produce a useful strategy. We approached this issue using reward shaping, essentially a modification of the reward to be a function of the reward and the observation, rather than the reward alone. Here, we define the reward to be the sum of the score and the agent's speed (a metric displayed on the screen of the game). Indeed, when the reward was defined as such, the agents learned to finish the race in first place within a short training period. | 1611.02205#22 | 1611.02205#24 | 1611.02205 | [
"1609.05143"
]
|
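The reward-shaping idea above (reward = score plus on-screen speed) can be expressed as a small wrapper. This is a hedged sketch: the "score" and "speed" entries read from `info` are hypothetical accessors for RLE's in-game state, and the weighting is illustrative rather than the authors' exact choice.

```python
class ShapedRacingReward:
    """Wraps an environment step so the reward also reflects the agent's speed."""

    def __init__(self, env, speed_weight=1.0):
        self.env = env
        self.speed_weight = speed_weight
        self.prev_score = 0

    def reset(self):
        self.prev_score = 0
        return self.env.reset()

    def step(self, action):
        obs, _, done, info = self.env.step(action)
        score = info.get("score", 0)        # sparse: changes only when a lap completes
        speed = info.get("speed", 0.0)      # dense: read every frame from the HUD / RAM
        shaped = (score - self.prev_score) + self.speed_weight * speed
        self.prev_score = score
        return obs, shaped, done, info
```

The speed term provides a dense signal every frame while the lap score stays sparse; the wrapper simply adds the two, mirroring the shaping described in the preceding chunk.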
1611.02205#24 | Playing SNES in the Retro Learning Environment | The second case study is the famous game of Super Mario. In this game the agent, Mario, is required to reach the right-hand side of the screen while avoiding enemies and collecting coins. We found this case interesting as it involves several challenges at once: a dynamic background that can change drastically within a level, sparse and delayed rewards, and multiple tasks (such as avoiding enemies and pits, advancing rightwards, and collecting coins). To our surprise, DQN was able to reach the end of the level without any reward shaping. This was possible because the agent receives rewards for events (collecting coins, stomping on enemies, etc.) that tend to appear to the right of the player, causing the agent to prefer moving right. | 1611.02205#23 | 1611.02205#25 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#25 | Playing SNES in the Retro Learning Environment | However, the training time required for convergence was significantly longer than for other games. We defined the reward as the sum of the in-game reward and a bonus granted according to the player's position, making moving right preferable. (A video demonstration can be found at https://youtu.be/nUl9XLMveEU.) Figure 2 (bar chart of normalized scores on the RLE benchmarks F-Zero with speed bonus, Gradius III, Mortal Kombat, Super Mario, and Wolfenstein): DQN, D-DQN and Dueling D-DQN performance. Results were normalized by subtracting a random agent's score and dividing by the human player's score; thus 100 represents a human player and zero a random agent. This reward | 1611.02205#24 | 1611.02205#26 | 1611.02205 | [
"1609.05143"
]
|
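For reference, the normalization used in Figure 2 (and implicitly over Table 3's raw scores) can be written as a one-liner. Whether the human score is itself baselined by the random score is not stated in the chunk above, so the version below follows the caption literally and should be treated as an assumption, as should the random-agent score of 0 used in the example.

```python
def normalized_score(agent_score, random_score, human_score):
    """Figure 2 caption: subtract the random agent's score, divide by the human score.
    100 corresponds to human-level play, 0 to a random agent."""
    return 100.0 * (agent_score - random_score) / human_score

# Illustrative only: Table 3's Dueling D-DQN score for F-Zero with a hypothetical
# random-agent score of 0 gives roughly 82.
print(round(normalized_score(5161, 0, 6298), 1))
```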
1611.02205#26 | Playing SNES in the Retro Learning Environment | proved useful, as the training time required for convergence decreased significantly. The two games above can be seen in Figure 3, and Figure 4 illustrates the agent's average value function. Though both agents were able to complete the stage they were trained on, the convergence rate with reward shaping is significantly quicker due to the agent's immediate realization that it should move rightwards. Figure 3: Left: | 1611.02205#25 | 1611.02205#27 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#27 | Playing SNES in the Retro Learning Environment | The game Super Mario with an added bonus for moving right, enabling the agent to master the game after less training time. Right: The game F-Zero. By granting a reward for speed, the agent was able to master this game, as opposed to using solely the in-game reward. Figure 4 (averaged action value Q versus training epoch): Averaged action-value (Q) for Super Mario trained with a reward bonus for moving right (blue) and without it (red). 4.3 MULTI-AGENT REINFORCEMENT LEARNING In this section we describe our experiments with RLE's multi-agent capabilities. We consider the case where the number of agents is n = 2 and the goals of the agents are opposite, i.e. r1 = -r2. This scheme is known as fully competitive (Buşoniu et al., 2010). We used the simple single-agent RL approach (as described by Buşoniu et al. (2010), Section 5.4.1), which is to apply the single-agent approach to the multi-agent case. This approach proved useful in Crites and Barto (1996) and Matarić (1997). More elaborate schemes are possible, such as the minimax-Q algorithm (Littman, 1994; Littman, 2001); these may be explored in future work. We conducted three experiments in this setup. The first was to train two different agents against the in-game AI, as done in previous sections, and to evaluate their performance by letting them compete against each other; here, rather than achieving the highest score, the goal was to win a tournament consisting of 50 rounds, as is common in human-player competitions. The second experiment was to initially train two agents against the in-game AI and then resume training while they compete against each other; in this case, we evaluated each agent by playing again against the in-game AI, separately. Finally, in our last experiment we tried to boost the agent's capabilities by alternating its opponents, switching between the in-game AI and other trained agents. 4.3.1 MULTI-AGENT REINFORCEMENT LEARNING RESULTS We chose the game Mortal Kombat, a two-character side-viewed fi | 1611.02205#26 | 1611.02205#28 | 1611.02205 | [
"1609.05143"
]
|
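A minimal sketch of the fully competitive scheme described above (n = 2, r1 = -r2), where each agent is trained with an ordinary single-agent algorithm and simply treats its rival as part of the environment. The `env.step` call returning player 1's reward and the `act`/`observe` agent interface are assumptions made for illustration, not RLE's or the authors' API.

```python
def play_competitive_episode(env, agent1, agent2):
    """Both agents act every frame; player 2 learns from the negated reward (r2 = -r1)."""
    obs = env.reset()
    done = False
    total_r1 = 0.0
    while not done:
        a1 = agent1.act(obs)
        a2 = agent2.act(obs)                 # both players see the same shared screen
        obs, r1, done, _ = env.step((a1, a2))
        agent1.observe(obs, r1, done)        # standard single-agent update
        agent2.observe(obs, -r1, done)       # fully competitive: opposite goal
        total_r1 += r1
    return total_r1
```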
1611.02205#28 | Playing SNES in the Retro Learning Environment | ghting game (a screenshot of the game can be seen in Figure 1), as a testbed for the above, as it exhibits favorable properties: both players share the same screen, and the agent's optimal policy is heavily dependent on the rival's behavior, unlike in racing games, for example. In order to evaluate the two agents fairly, both were trained using the same characters, maintaining the identity of rival and agent. Furthermore, to remove the impact of the starting positions of both agents on their performance, the starting positions were initialized randomly. | 1611.02205#27 | 1611.02205#29 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#29 | Playing SNES in the Retro Learning Environment | In the first experiment we evaluated all pairwise combinations of DQN, D-DQN, and Dueling D-DQN. Each agent was trained against the in-game AI until convergence, and then 50 matches were played between each pair of agents. DQN lost 28 out of 50 games against Dueling D-DQN and 33 against D-DQN; D-DQN lost 26 times to Dueling D-DQN. This win balance isn't far from the random case, since the algorithms converged to policies in which movement towards the opponent is not | 1611.02205#28 | 1611.02205#30 | 1611.02205 | [
"1609.05143"
]
|
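A short sketch of the 50-round tournament evaluation used above. `play_match` is a hypothetical helper (for example, one full match between two trained agents) returning the first agent's score margin; treating a positive margin as a win is an assumption for illustration.

```python
def run_tournament(play_match, rounds=50):
    """Counts wins over a fixed number of matches, mirroring the 50-round evaluation.
    play_match() -> float: assumed to return agent A's score margin for one match."""
    wins_a = sum(1 for _ in range(rounds) if play_match() > 0)
    return wins_a, rounds - wins_a
```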
1611.02205#30 | Playing SNES in the Retro Learning Environment | required, rather than to policies that generalize across the game. Therefore, in many episodes little interaction between the two agents occurs, leading to a semi-random outcome. In our second experiment, we continued the training process of the D-DQN network by letting it compete against the Dueling D-DQN network. After this additional training, D-DQN was able to win 28 out of 30 games against its new rival, yet when we re-evaluated it over 30 episodes against the in-game AI, its performance deteriorated drastically (from an average of 17,000 to an average of -22,000). This demonstrates a form of catastrophic forgetting (Goodfellow et al., 2013), even though the agents played the same game. In our third experiment, we trained a Dueling D-DQN agent against three different rivals: the in-game AI, a trained DQN agent, and a trained Dueling D-DQN agent, in an alternating manner, such that in each episode a different rival played as the opponent, with the intention of preventing the agent from learning a policy suitable for just one opponent. The new agent was able to achieve a score of 162,966 (compared to the " | 1611.02205#29 | 1611.02205#31 | 1611.02205 | [
"1609.05143"
]
|
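The opponent-alternation scheme of the third experiment can be sketched as cycling through a pool of rivals, one per episode, so the learner never overfits to a single opponent's policy. The `train_episode` callable and the contents of the opponent pool are assumptions for illustration only.

```python
import itertools

def train_with_alternating_opponents(learner, opponents, train_episode, num_episodes=10000):
    """Each episode is played against a different member of the pool
    (e.g. [in_game_ai, dqn_agent, dueling_dqn_agent]) in round-robin order."""
    pool = itertools.cycle(opponents)
    for _ in range(num_episodes):
        rival = next(pool)
        train_episode(learner, rival)   # any single-agent RL update against this rival
```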
1611.02205#31 | Playing SNES in the Retro Learning Environment | normal" Dueling D-DQN, which achieved 169,633). As a new and objective measure of generalization, we configured the in-game AI difficulty to be "very hard" (as opposed to the default "medium" difficulty). Under this metric the alternating version achieved 83,400, compared to -33,266 for the Dueling D-DQN trained in the default setting, demonstrating that the agent learned to generalize to policies that weren't observed during training. | 1611.02205#30 | 1611.02205#32 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#32 | Playing SNES in the Retro Learning Environment | 4.4 FUTURE CHALLENGES As demonstrated, RLE presents numerous challenges that have yet to be answered. In addition to learning all available games, the task of learning games in which the reward delay is extreme, such as F-Zero without reward shaping, remains an unsolved challenge. Additionally, some games, such as Super Mario, feature several stages that differ in background and level structure. The task of generalizing in platform games, as in learning on one stage and being tested on another, is another unexplored challenge. Likewise, surpassing human performance remains a challenge, since current state-of-the-art algorithms still struggle with many SNES games. | 1611.02205#31 | 1611.02205#33 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#33 | Playing SNES in the Retro Learning Environment | # 5 CONCLUSION We introduced a rich environment for evaluating and developing reinforcement learning algorithms, one that presents significant challenges to current state-of-the-art algorithms. In comparison to other environments, RLE provides a large number of games with access to both the screen and the in-game state. The modular implementation we chose allows extension of the environment with new consoles and games, thus ensuring the relevance of the environment to RL algorithms for years to come (see Table 2). We encountered several games in which the learning process is highly dependent on the reward defi | 1611.02205#32 | 1611.02205#34 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#34 | Playing SNES in the Retro Learning Environment | nition. This issue can be addressed and explored in RLE, as the reward definition can be modified easily. The challenges presented by the RLE include 3D interpretation, delayed rewards, noisy backgrounds, stochastic AI behavior, and more. Although some algorithms were able to play successfully on part of the games, fully overcoming these challenges will require an agent to incorporate both technique and strategy. Therefore, we believe that the RLE is a great platform for future RL research. # 6 ACKNOWLEDGMENTS The authors are grateful to the Signal and Image Processing Lab (SIPL) staff for their support, to Alfred Agrell and the LibRetro community for their support, and to Marc G. Bellemare for his valuable input. # REFERENCES M. G. Bellemare, Y. Naddaf, J. Veness, and M. | 1611.02205#33 | 1611.02205#35 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#35 | Playing SNES in the Retro Learning Environment | Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, June 2013. M. G. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. arXiv preprint arXiv:1606.01868, 2016. B. Bischoff, D. Nguyen-Tuong, I.-H. Lee, F. Streichert, and A. Knoll. | 1611.02205#34 | 1611.02205#36 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#36 | Playing SNES in the Retro Learning Environment | Hierarchical reinforcement learning for robot navigation. In ESANN, 2013. G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016. L. Buşoniu, R. Babuška, and B. | 1611.02205#35 | 1611.02205#37 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#37 | Playing SNES in the Retro Learning Environment | De Schutter. Multi-agent reinforcement learning: An overview. In Innovations in Multi-Agent Systems and Applications-1, pages 183–221. Springer, 2010. M. Campbell, A. J. Hoane, and F.-h. Hsu. Deep Blue. Artificial Intelligence, 134(1):57–83, 2002. R. Crites and A. Barto. Improving elevator performance using reinforcement learning. In Advances in Neural Information Processing Systems 8. | 1611.02205#36 | 1611.02205#38 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#38 | Playing SNES in the Retro Learning Environment | Citeseer, 1996. I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013. M. Johnson, K. Hofmann, T. Hutton, and D. Bignell. The Malmo platform for artificial intelligence experimentation. In International Joint Conference on Artificial Intelligence (IJCAI), page 4246, 2016. | 1611.02205#37 | 1611.02205#39 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#39 | Playing SNES in the Retro Learning Environment | libRetro site. Libretro. www.libretro.com. Accessed: 2016-11-03. M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, volume 157, pages 157–163, 1994. M. L. Littman. Value-function reinforcement learning in Markov games. Cognitive Systems Research, 2(1):55–66, 2001. M. J. Matarić. | 1611.02205#38 | 1611.02205#40 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#40 | Playing SNES in the Retro Learning Environment | Reinforcement learning in the multi-robot domain. In Robot Colonies, pages 73–83. Springer, 1997. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. J. Schaeffer, J. Culberson, N. Treloar, B. Knight, P. Lu, and D. | 1611.02205#39 | 1611.02205#41 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#41 | Playing SNES in the Retro Learning Environment | Szafron. A world championship caliber checkers program. Artificial Intelligence, 53(2):273–289, 1992. S. Shalev-Shwartz, N. Ben-Zrihem, A. Cohen, and A. Shashua. Long-term planning by short-term prediction. arXiv preprint arXiv:1602.01580, 2016. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. | 1611.02205#40 | 1611.02205#42 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#42 | Playing SNES in the Retro Learning Environment | Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. G. Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995. J. Togelius, S. Karakovskiy, J. Koutník, and J. Schmidhuber. Super Mario evolution. | 1611.02205#41 | 1611.02205#43 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#43 | Playing SNES in the Retro Learning Environment | In 2009 IEEE Symposium on Computational Intelligence and Games, pages 156–161. IEEE, 2009. Universe. Universe. universe.openai.com, 2016. Accessed: 2016-12-13. H. van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. CoRR, abs/1509.06461, 2015. Z. Wang, N. de Freitas, and M. Lanctot. | 1611.02205#42 | 1611.02205#44 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#44 | Playing SNES in the Retro Learning Environment | Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015. Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. arXiv preprint arXiv:1609.05143, 2016. | 1611.02205#43 | 1611.02205#45 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#45 | Playing SNES in the Retro Learning Environment | # Appendices Experimental Results Table 3: Average results of DQN, D-DQN, Dueling D-DQN, and a human player. F-Zero: DQN 3116, D-DQN 3636, Dueling D-DQN 5161, Human 6298. Gradius III: DQN 7583, D-DQN 12343, Dueling D-DQN 16929, Human 24440. Mortal Kombat: DQN 83733, D-DQN 56200, Dueling D-DQN 169300, Human 132441. Super Mario: DQN 11765, D-DQN 16946, Dueling D-DQN 20030, Human 36386. Wolfenstein: DQN 100, D-DQN 83, Dueling D-DQN 40, Human 2952. | 1611.02205#44 | 1611.02205#46 | 1611.02205 | [
"1609.05143"
]
|
1611.02205#46 | Playing SNES in the Retro Learning Environment | 11 | 1611.02205#45 | 1611.02205 | [
"1609.05143"
]
|
|
1611.01796#0 | Modular Multitask Reinforcement Learning with Policy Sketches | arXiv:1611.01796v2 [cs.LG] 17 Jun 2017 # Modular Multitask Reinforcement Learning with Policy Sketches # Jacob Andreas 1 Dan Klein 1 Sergey Levine 1 # Abstract We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them: | 1611.01796#1 | 1611.01796 | [
"1606.04695"
]
|
|
1611.01796#1 | Modular Multitask Reinforcement Learning with Policy Sketches | speciï¬ cally not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion sig- nals, or intrinsic motivations). To learn from sketches, we present a model that associates ev- ery subtask with a modular subpolicy, and jointly maximizes reward over full task-speciï¬ c poli- cies by tying parameters across shared subpoli- cies. Optimization is accomplished via a decou- pled actorâ critic training objective that facilitates learning common behaviors from multiple dis- similar reward functions. We evaluate the effec- tiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level sub- goals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learn- ing task-speciï¬ c or shared policies, while nat- urally inducing a library of interpretable primi- tive behaviors that can be recombined to rapidly adapt to new tasks. | 1611.01796#0 | 1611.01796#2 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#2 | Modular Multitask Reinforcement Learning with Policy Sketches | # 1. Introduction v1: make planks # â ¢: make sticks # Th bi: get wood i ha bi:getwood 2 by: use workbench 12 53: use toolshed Figure 1: Learning from policy sketches. The ï¬ gure shows sim- pliï¬ ed versions of two tasks (make planks and make sticks, each associated with its own policy (Î 1 and Î 2 respectively). These policies share an initial high-level action b1: both require the agent to get wood before taking it to an appropriate crafting sta- tion. Even without prior information about how the associated be- havior Ï 1 should be implemented, knowing that the agent should initially follow the same subpolicy in both tasks is enough to learn a reusable representation of their shared structure. delayed rewards or other long-term structure are often dif- ï¬ cult to solve with ï¬ at, monolithic policies, and a long line of prior work has studied methods for learning hier- archical policy representations (Sutton et al., 1999; Diet- terich, 2000; Konidaris & Barto, 2007; Hauser et al., 2008). While unsupervised discovery of these hierarchies is possi- ble (Daniel et al., 2012; Bacon & Precup, 2015), practical approaches often require detailed supervision in the form of explicitly speciï¬ ed high-level actions, subgoals, or be- havioral primitives (Precup, 2000). These depend on state representations simple or structured enough that suitable reward signals can be effectively engineered by hand. This paper describes a framework for learning compos- able deep subpolicies in a multitask setting, guided only by abstract sketches of high-level behavior. General rein- forcement learning algorithms allow agents to solve tasks in complex environments. But tasks featuring extremely | 1611.01796#1 | 1611.01796#3 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#3 | Modular Multitask Reinforcement Learning with Policy Sketches | 1University of California, Berkeley. Correspondence to: Jacob Andreas <[email protected]>. But is such ï¬ ne-grained supervision actually necessary to achieve the full beneï¬ ts of hierarchy? Speciï¬ cally, is it necessary to explicitly ground high-level actions into the representation of the environment? Or is it sufï¬ cient to simply inform the learner about the abstract structure of policies, without ever specifying how high-level behaviors should make use of primitive percepts or actions? | 1611.01796#2 | 1611.01796#4 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#4 | Modular Multitask Reinforcement Learning with Policy Sketches | Proceedings of the 34 th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s). To answer these questions, we explore a multitask re- learning setting where the learner is pre- inforcement # Th # [ai 113 Modular Multitask Reinforcement Learning with Policy Sketches sented with policy sketches. Policy sketches are short, un- grounded, symbolic representations of a task that describe its component parts, as illustrated in Figure 1. While sym- bols might be shared across tasks (get wood appears in sketches for both the make planks and make sticks tasks), the learner is told nothing about what these symbols mean, in terms of either observations or intermediate rewards. We present an agent architecture that learns from policy sketches by associating each high-level action with a pa- rameterization of a low-level subpolicy, and jointly op- timizes over concatenated task-speciï¬ c policies by tying parameters across shared subpolicies. | 1611.01796#3 | 1611.01796#5 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#5 | Modular Multitask Reinforcement Learning with Policy Sketches | We ï¬ nd that this architecture can use the high-level guidance provided by sketches, without any grounding or concrete deï¬ nition, to dramatically accelerate learning of complex multi-stage be- haviors. Our experiments indicate that many of the beneï¬ ts to learning that come from highly detailed low-level su- pervision (e.g. from subgoal rewards) can also be obtained from fairly coarse high-level supervision (i.e. from policy sketches). Crucially, sketches are much easier to produce: they require no modiï¬ cations to the environment dynam- ics or reward function, and can be easily provided by non- experts. This makes it possible to extend the beneï¬ ts of hierarchical RL to challenging environments where it may not be possible to specify by hand the details of relevant subtasks. We show that our approach substantially outper- forms purely unsupervised methods that do not provide the learner with any task-speciï¬ c guidance about how hierar- chies should be deployed, and further that the speciï¬ c use of sketches to parameterize modular subpolicies makes bet- ter use of sketches than conditioning on them directly. that are easily recombined. This makes it possible to eval- uate our approach under a variety of different data condi- tions: (1) learning the full collection of tasks jointly via reinforcement, (2) in a zero-shot setting where a policy sketch is available for a held-out task, and (3) in a adapta- tion setting, where sketches are hidden and the agent must learn to adapt a pretrained policy to reuse high-level ac- tions in a new task. In all cases, our approach substantially outperforms previous approaches based on explicit decom- position of the Q function along subtasks (Parr & Russell, 1998; Vogel & Jurafsky, 2010), unsupervised option dis- covery (Bacon & Precup, 2015), and several standard pol- icy gradient baselines. We consider three families of tasks: a 2-D Minecraft- inspired crafting game (Figure 3a), in which the agent must acquire particular resources by ï¬ | 1611.01796#4 | 1611.01796#6 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#6 | Modular Multitask Reinforcement Learning with Policy Sketches | nding raw ingredients, combining them together in the proper order, and in some cases building intermediate tools that enable the agent to al- ter the environment itself; a 2-D maze navigation task that requires the agent to collect keys and open doors, and a 3-D locomotion task (Figure 3b) in which a quadrupedal robot must actuate its joints to traverse a narrow winding cliff. In all tasks, the agent receives a reward only after the ï¬ nal goal is accomplished. For the most challenging tasks, in- volving sequences of four or ï¬ ve high-level actions, a task- speciï¬ c agent initially following a random policy essen- tially never discovers the reward signal, so these tasks can- not be solved without considering their hierarchical struc- ture. | 1611.01796#5 | 1611.01796#7 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#7 | Modular Multitask Reinforcement Learning with Policy Sketches | We have released code at http://github.com/ jacobandreas/psketch. The present work may be viewed as an extension of recent approaches for learning compositional deep architectures from structured program descriptors (Andreas et al., 2016; Reed & de Freitas, 2016). Here we focus on learning in in- teractive environments. This extension presents a variety of technical challenges, requiring analogues of these methods that can be trained from sparse, non-differentiable reward signals without demonstrations of desired system behavior. Our contributions are: A general paradigm for multitask, hierarchical, deep reinforcement learning guided by abstract sketches of task-speciï¬ c policies. A concrete recipe for learning from these sketches, built on a general family of modular deep policy rep- resentations and a multitask actorâ critic training ob- jective. The modular structure of our approach, which associates every high-level action symbol with a discrete subpolicy, naturally induces a library of interpretable policy fragments # 2. Related Work | 1611.01796#6 | 1611.01796#8 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#8 | Modular Multitask Reinforcement Learning with Policy Sketches | The agent representation we describe in this paper be- longs to the broader family of hierarchical reinforcement learners. As detailed in Section 3, our approach may be viewed as an instantiation of the options framework ï¬ rst described by Sutton et al. (1999). A large body of work describes techniques for learning options and related ab- stract actions, in both single- and multitask settings. Most techniques for learning options rely on intermediate su- pervisory signals, e.g. to encourage exploration (Kearns & Singh, 2002) or completion of pre-deï¬ ned subtasks (Kulka- rni et al., 2016). An alternative family of approaches em- ploys post-hoc analysis of demonstrations or pretrained policies to extract reusable sub-components (Stolle & Pre- cup, 2002; Konidaris et al., 2011; Niekum et al., 2015). Techniques for learning options with less guidance than the present work include Bacon & Precup (2015) and Vezhn- evets et al. (2016), and other general hierarchical policy learners include Daniel et al. (2012), Bakker & Schmidhu- ber (2004) and Menache et al. (2002). We will see that the minimal supervision provided by policy sketches re- Modular Multitask Reinforcement Learning with Policy Sketches sults in (sometimes dramatic) improvements over fully un- supervised approaches, while being substantially less oner- ous for humans to provide compared to the grounded su- pervision (such as explicit subgoals or feature abstraction hierarchies) used in previous work. rather than direct supervision. Another closely related fam- ily of models includes neural programmers (Neelakantan et al., 2015) and programmerâ interpreters (Reed & de Fre- itas, 2016), which generate discrete computational struc- tures but require supervision in the form of output actions or full execution traces. Once a collection of high-level actions exists, agents are faced with the problem of learning meta-level (typically semi-Markov) policies that invoke appropriate high-level actions in sequence (Precup, 2000). | 1611.01796#7 | 1611.01796#9 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#9 | Modular Multitask Reinforcement Learning with Policy Sketches | The learning problem we describe in this paper is in some sense the direct dual to the problem of learning these meta-level policies: there, the agent begins with an inventory of complex primitives and must learn to model their behavior and select among them; here we begin knowing the names of appropriate high-level actions but nothing about how they are imple- mented, and must infer implementations (but not, initially, abstract plans) from context. Our model can be combined with these approaches to support a â mixedâ supervision condition where sketches are available for some tasks but not others (Section 4.5). Another closely related line of work is the Hierarchical Abstract Machines (HAM) framework introduced by Parr & Russell (1998). Like our approach, HAMs begin with a representation of a high-level policy as an automaton (or a more general computer program; Andre & Russell, 2001; Marthi et al., 2004) and use reinforcement learn- ing to ï¬ ll in low-level details. Because these approaches attempt to learn a single representation of the Q function for all subtasks and contexts, they require extremely strong formal assumptions about the form of the reward function and state representation (Andre & Russell, 2002) that the present work avoids by decoupling the policy representa- tion from the value function. They perform less effectively when applied to arbitrary state representations where these assumptions do not hold (Section 4.3). We are addition- ally unaware of past work showing that HAM automata can be automatically inferred for new tasks given a pre-trained model, while here we show that it is easy to solve the cor- responding problem for sketch followers (Section 4.5). Our approach is also inspired by a number of recent efforts toward compositional reasoning and interaction with struc- tured deep models. Such models have been previously used for tasks involving question answering (Iyyer et al., 2014; Andreas et al., 2016) and relational reasoning (Socher et al., 2012), and more recently for multi-task, multi-robot trans- fer problems (Devin et al., 2016). In the present workâ as in existing approaches employing dynamically assembled modular networksâ task-speciï¬ c training signals are prop- agated through a collection of composed discrete structures with tied weights. | 1611.01796#8 | 1611.01796#10 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#10 | Modular Multitask Reinforcement Learning with Policy Sketches | Here the composed structures spec- ify time-varying policies rather than feedforward computa- tions, and their parameters must be learned via interaction We view the problem of learning from policy sketches as complementary to the instruction following problem stud- ied in the natural language processing literature. Existing work on instruction following focuses on mapping from natural language strings to symbolic action sequences that are then executed by a hard-coded interpreter (Branavan et al., 2009; Chen & Mooney, 2011; Artzi & Zettlemoyer, 2013; Tellex et al., 2011). Here, by contrast, we focus on learning to execute complex actions given symbolic repre- sentations as a starting point. Instruction following models may be viewed as joint policies over instructions and en- vironment observations (so their behavior is not deï¬ ned in the absence of instructions), while the model described in this paper naturally supports adaptation to tasks where no sketches are available. We expect that future work might combine the two lines of research, bootstrapping policy learning directly from natural language hints rather than the semi-structured sketches used here. | 1611.01796#9 | 1611.01796#11 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#11 | Modular Multitask Reinforcement Learning with Policy Sketches | # 3. Learning Modular Policies from Sketches We consider a multitask reinforcement learning prob- lem arising from a family of infinite-horizon discounted Markov decision processes in a shared environment. This environment is specified by a tuple (S,.A, P, 7), with S a set of states, A a set of low-level actions, P:S x AxSâ R a transition probability distribution, and 7 a discount fac- tor. Each task + â ¬ T is then specified by a pair (R-,p,), with R, : S â R a task-specific reward function and p, : S â | 1611.01796#10 | 1611.01796#12 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#12 | Modular Multitask Reinforcement Learning with Policy Sketches | Ran initial distribution over states. For a fixed sequence {(s;,a;)} of states and actions obtained from a rollout of a given policy, we will denote the empirical return starting in state 5; as qi == 72,4, 7 ~*~" R(s;). In addi- tion to the components of a standard multitask RL problem, we assume that tasks are annotated with sketches K,, each consisting of a sequence (b,1,b;2,...) of high-level sym- bolic labels drawn from a fixed vocabulary B. # B # 3.1. Model We exploit the structural information provided by sketches by constructing for each symbol b a corresponding subpol- icy Ï b. By sharing each subpolicy across all tasks annotated with the corresponding symbol, our approach naturally learns the shared abstraction for the corresponding subtask, without requiring any information about the grounding of that task to be explicitly speciï¬ ed by annotation. Modular Multitask Reinforcement Learning with Policy Sketches Algorithm 1 TRAIN-STEP(Î , curriculum) 1: 2: while 3: 4: 5: 6: 7: 8: // update parameters do 9: for b 10: 11: 12: 13: 14: | 1611.01796#11 | 1611.01796#13 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#13 | Modular Multitask Reinforcement Learning with Policy Sketches | D â â |D| // sample task Ï from curriculum (Section 3.3) Ï // do rollout d = ) · â ¼ { D â D â ª } â ¼ B,r â ¬ T do # â T â B { = 7} # d= //update subpolicy -%+3N, // update critic one Ie + H Da # â D c,(s;)) # log m(ai|si)) (4 (Ver(si)) (gi â | 1611.01796#12 | 1611.01796#14 | 1611.01796 | [
"1606.04695"
]
|
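As an illustrative aside on the sketch-following mechanism this paper describes (subpolicies concatenated according to a sketch, with control advancing whenever the active subpolicy emits STOP), here is a minimal rollout loop. The subpolicy interface, the STOP sentinel, and the environment API are assumptions, not the authors' code.

```python
STOP = "STOP"

def run_sketch(env, subpolicies, sketch, max_steps=1000):
    """Execute the task policy formed by concatenating the sketch's subpolicies.
    subpolicies: dict mapping a symbol b to a callable state -> action (or STOP)."""
    state = env.reset()
    idx, total_reward = 0, 0.0
    for _ in range(max_steps):
        if idx >= len(sketch):
            break                                  # every subtask has signalled completion
        action = subpolicies[sketch[idx]](state)
        if action == STOP:
            idx += 1                               # hand control to the next subpolicy
            continue
        state, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```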
1611.01796#14 | Modular Multitask Reinforcement Learning with Policy Sketches | er â er (8i)) 14s one Ie + H Da (Ver(si)) (gi â er (8i)) â â â Algorithm 2 TRAIN-LOOP() 1: // initialize subpolicies randomly 2: TI = INIT() 3: lmax < 1 5 Tmin <- â 0O 6: / initialize â ¬nax-step curriculum 1 Tl ={râ ¬T:|K,| < imax} 8: curriculum(-) = Unif(7â ) 9: while rmin < Tgooa do 10: // update parameters (Algorithm 11: TRAIN-STEP(II, curriculum) 12: curriculum(r) « I[7 â ¬ 13: Tmin <â Minze7 Er, 14: bmax < â ¬max + 1 Tmin <- â 0O / initialize â ¬nax-step curriculum uniformly Tl ={râ ¬T:|K,| < imax} curriculum(-) = Unif(7â ) while rmin < Tgooa do # imax} | // update parameters (Algorithm 1) TRAIN-STEP(II, curriculum) curriculum(r) « I[7 â ¬ T'|(lLâ Er,) Tmin <â Minze7 Er, < â ¬max + 1 # Ë ErÏ ) « I[7 Minze7 Er, 12: curriculum(r) « I[7 â ¬ T'|(lLâ Er,) WreT # Ï â # â T â # â ¬max + 1 # bmax < # â T | 1611.01796#13 | 1611.01796#15 | 1611.01796 | [
"1606.04695"
]
|
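To make the curriculum this paper describes concrete, the sketch below samples tasks in proportion to one minus their estimated reward, restricted to tasks whose sketch length is within the current cap; the dictionaries used here are illustrative assumptions and rewards are assumed normalized to [0, 1] as in the text.

```python
import random

def sample_task(task_sketch_lengths, reward_estimates, max_len):
    """Pick a task with probability proportional to (1 - estimated reward),
    among tasks whose sketch length does not exceed the current curriculum cap."""
    eligible = [t for t, n in task_sketch_lengths.items() if n <= max_len]
    weights = [max(1.0 - reward_estimates.get(t, 0.0), 1e-6) for t in eligible]
    return random.choices(eligible, weights=weights, k=1)[0]
```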
1611.01796#15 | Modular Multitask Reinforcement Learning with Policy Sketches | At each timestep, a subpolicy may select either a low-level action a or a special STOP action. We denote the â A + := augmented state space . At a high } level, this framework is agnostic to the implementation of subpolicies: any function that takes a representation of the current state onto a distribution over over all θb to maximize expected discounted reward JM) = 9° IM) = SOEs an, [D0 Re (s1)] across all tasks Ï . # â T # A In this paper, we focus on the case where each Ï b is rep- resented as a neural network.1 These subpolicies may be viewed as options of the kind described by Sutton et al. (1999), with the key distinction that they have no initiation semantics, but are instead invokable everywhere, and have no explicit representation as a function from an initial state to a distribution over ï¬ nal states (instead implicitly using the STOP action to terminate). # 3.2. Policy Optimization Here that optimization is accomplished via a simple decou- pled actorâ critic method. In a standard policy gradient ap- proach, with a single policy Ï with parameters θ, we com- pute gradient steps of the form (Williams, 1992): VoI(m) = > (Vo log m(ai|si)) (ai _ e(si)), (1) a â Given a ï¬ xed sketch (b1, b2, . . . ), a task-speciï¬ c policy Î Ï is formed by concatenating its associated subpolicies in se- quence. In particular, the high-level policy maintains a sub- policy index i (initially 0), and executes actions from Ï bi until the STOP symbol is emitted, at which point control is passed to Ï bi+1. We may thus think of Î Ï as inducing a , with transitions: Markov chain over the state space | 1611.01796#14 | 1611.01796#16 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#16 | Modular Multitask Reinforcement Learning with Policy Sketches | where the baseline or â criticâ c can be chosen indepen- dently of the future without introducing bias into the gra- dient. Recalling our previous deï¬ nition of qi as the empir- ical return starting from si, this form of the gradient cor- responds to a generalized advantage estimator (Schulman et al., 2015a) with λ = 1. Here c achieves close to the optimal variance (Greensmith et al., 2004) when it is set # S à B aâ AÏ bi(a | s) (s, bi) â (sâ , bi) â (8, bi41) with pr. )),<47,(als) - P(sâ with pr. 7», (STOP|s) 8,@) â | | 1611.01796#15 | 1611.01796#17 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#17 | Modular Multitask Reinforcement Learning with Policy Sketches | Note that II, is semi-Markov with respect to projection of the augmented state space S x B onto the underlying state space S. We denote the complete family of task-specific policies II := {I}, and let each 7, be an arbitrary function of the current environment state parameterized by some weight vector ,. The learning problem is to optimize 1 For ease of presentation, this section assumes that these sub- policy networks are independently parameterized. As described in Section 4.2, it is also possible to share parameters between sub- policies, and introduce discrete subtask structure by way of an embedding of each symbol b. Figure 2: Model overview. Each subpolicy Ï is uniquely associ- ated with a symbol b implemented as a neural network that maps from a state si to distributions over A+, and chooses an action ai by sampling from this distribution. Whenever the STOP action is sampled, control advances to the next subpolicy in the sketch. Modular Multitask Reinforcement Learning with Policy Sketches | 1611.01796#16 | 1611.01796#18 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#18 | Modular Multitask Reinforcement Learning with Policy Sketches | exactly equal to the state-value function VÏ (si) = EÏ qi for the target policy Ï starting in state si. The situation becomes slightly more complicated when generalizing to modular policies built by sequencing sub- policies. In this case, we will have one subpolicy per sym- bol but one critic per task. This is because subpolicies Ï b might participate in a number of composed policies Î Ï , each associated with its own reward function RÏ | 1611.01796#17 | 1611.01796#19 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#19 | Modular Multitask Reinforcement Learning with Policy Sketches | . Thus in- dividual subpolicies are not uniquely identiï¬ ed with value functions, and the aforementioned subpolicy-speciï¬ c state- value estimator is no longer well-deï¬ ned. We extend the actorâ critic method to incorporate the decoupling of poli- cies from value functions by allowing the critic to vary per- sample (that is, per-task-and-timestep) depending on the reward function with which the sample is associated. Not- θb J(Î Ï ), i.e. the sum of ing that t:bâ KÏ â gradients of expected rewards across all tasks in which Ï b participates, we have: Vo (Il) = }> VoJ(I-) = a (Va, log m5(azilSri)) (Gi â er (Sri), (2) where each state-action pair (sÏ i, aÏ i) was selected by the subpolicy Ï b in the context of the task Ï | 1611.01796#18 | 1611.01796#20 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#20 | Modular Multitask Reinforcement Learning with Policy Sketches | . these steps, which is driven by a curriculum learning pro- cedure, is speciï¬ ed in Algorithm 2.) This is an on-policy algorithm. In each step, the agent samples tasks from a task distribution provided by a curriculum (described in the fol- lowing subsection). The current family of policies Î is used to perform rollouts in each sampled task, accumulat- ing the resulting tuples of (states, low-level actions, high- level symbols, rewards, and task identities) into a dataset . D reaches a maximum size D, it is used to compute Once gradients w.r.t. both policy and critic parameters, and the parameter vectors are updated accordingly. The step sizes α and β in Algorithm 1 can be chosen adaptively using any ï¬ rst-order method. # 3.3. | 1611.01796#19 | 1611.01796#21 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#21 | Modular Multitask Reinforcement Learning with Policy Sketches | Curriculum Learning For complex tasks, like the one depicted in Figure 3b, it is difï¬ cult for the agent to discover any states with positive reward until many subpolicy behaviors have already been learned. It is thus a better use of the learnerâ s time to focus on â easyâ tasks, where many rollouts will result in high reward from which appropriate subpolicy behavior can be inferred. But there is a fundamental tradeoff involved here: if the learner spends too much time on easy tasks before being made aware of the existence of harder ones, it may overï¬ t and learn subpolicies that no longer generalize or exhibit the desired structural properties. Now minimization of the gradient variance requires that each cÏ actually depend on the task identity. (This fol- lows immediately by applying the corresponding argument in Greensmith et al. (2004) individually to each term in the sum over Ï in Equation 2.) Because the value function is itself unknown, an approximation must be estimated from data. Here we allow these cÏ to be implemented with an arbitrary function approximator with parameters Î·Ï . This is trained to minimize a squared error criterion, with gradi- ents given by Vn. [5 Lae)? | = » (Vner(si)) (gi â er(si))- GB) Alternative forms of the advantage estimator (e.g. the TD residual RÏ (si)+γVÏ (si+1) VÏ (si) or any other member â | 1611.01796#20 | 1611.01796#22 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#22 | Modular Multitask Reinforcement Learning with Policy Sketches | of the generalized advantage estimator family) can be eas- ily substituted by simply maintaining one such estimator per task. Experiments (Section 4.4) show that condition- ing on both the state and the task identity results in notice- able performance improvements, suggesting that the vari- ance reduction provided by this objective is important for efï¬ cient joint learning of modular policies. To avoid both of these problems, we use a curriculum learn- ing scheme (Bengio et al., 2009) that allows the model to smoothly scale up from easy tasks to more difï¬ cult ones while avoiding overï¬ tting. | 1611.01796#21 | 1611.01796#23 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#23 | Modular Multitask Reinforcement Learning with Policy Sketches | Initially the model is pre- sented with tasks associated with short sketches. Once av- erage reward on all these tasks reaches a certain threshold, the length limit is incremented. We assume that rewards across tasks are normalized with maximum achievable re- ward 0 < qi < 1. Let Ë ErÏ denote the empirical estimate of the expected reward for the current policy on task Ï . Then Ë ErÏ , at each timestep, tasks are sampled in proportion to 1 which by assumption must be positive. Intuitively, the tasks that provide the strongest learning sig- nal are those in which (1) the agent does not on average achieve reward close to the upper bound, but (2) many episodes result in high reward. The expected reward com- ponent of the curriculum addresses condition (1) by en- suring that time is not spent on nearly solved tasks, while the length bound component of the curriculum addresses condition (2) by ensuring that tasks are not attempted until high-reward episodes are likely to be encountered. Experi- ments show that both components of this curriculum learn- ing scheme improve the rate at which the model converges to a good policy (Section 4.4). The complete procedure for computing a single gradient step is given in Algorithm 1. (The outer training loop over The complete curriculum-based training procedure is spec- iï¬ ed in Algorithm 2. Initially, the maximum sketch length Modular Multitask Reinforcement Learning with Policy Sketches Lmax is set to 1, and the curriculum initialized to sample length-1 tasks uniformly. (Neither of the environments we consider in this paper feature any length-1 tasks; in this case, observe that Algorithm 2 will simply advance to length-2 tasks without any parameter updates.) For each setting of f,x, the algorithm uses the current collection of task policies II to compute and apply the gradient step described in Algorithm 1. The rollouts obtained from the call to TRAIN-STEP can also be used to compute reward estimates fr,; these estimates determine a new task distri- bution for the curriculum. The inner loop is repeated un- til the reward threshold rgooq is exceeded, at which point émax 1S incremented and the process repeated over a (now- expanded) collection of tasks. | 1611.01796#22 | 1611.01796#24 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#24 | Modular Multitask Reinforcement Learning with Policy Sketches | # 4. Experiments (a) (b) 7: get gold bi: getwood by: get iron bs: use workbench bs: get gold 7: go to goal bi: north K bp: east bs: east We evaluate the performance of our approach in three envi- ronments: a crafting environment, a maze navigation en- vironment, and a cliff traversal environment. These en- vironments involve various kinds of challenging low-level control: agents must learn to avoid obstacles, interact with various kinds of objects, and relate ï¬ ne-grained joint ac- tivation to high-level locomotion goals. They also feature hierarchical structure: most rewards are provided only af- ter the agent has completed two to ï¬ ve high-level actions in the appropriate sequence, without any intermediate goals to indicate progress towards completion. Figure 3: Examples from the crafting and cliff environments used in this paper. An additional maze environment is also investigated. (a) In the crafting environment, an agent seeking to pick up the gold nugget in the top corner must ï¬ rst collect wood (1) and iron (2), use a workbench to turn them into a bridge (3), and use the (b) In the cliff environment, the bridge to cross the water (4). agent must reach a goal position by traversing a winding sequence of tiles without falling off. | 1611.01796#23 | 1611.01796#25 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#25 | Modular Multitask Reinforcement Learning with Policy Sketches | Control takes place at the level of individual joint angles; high-level behaviors like â move northâ must be learned. # 4.1. Implementation In all our experiments, we implement each subpolicy as a feedforward neural network with ReLU nonlinearities and a hidden layer with 128 hidden units, and each critic as a linear function of the current state. Each subpolicy network receives as input a set of features describing the current state of the environment, and outputs a distribution over actions. The agent acts at every timestep by sampling from this distribution. The gradient steps given in lines 8 and 9 of Algorithm 1 are implemented using RMSPROP (Tiele- man, 2012) with a step size of 0.001 and gradient clipping to a unit norm. We take the batch size D in Algorithm 1 to be 2000, and set γ = 0.9 in both environments. For cur- riculum learning, the improvement threshold rgood is 0.8. # 4.2. Environments The crafting environment (Figure 3a) is inspired by the popular game Minecraft, but is implemented in a discrete 2-D world. The agent may interact with objects in the world by facing them and executing a special USE action. Interacting with raw materials initially scattered around the environment causes them to be added to an inventory. Inter- acting with different crafting stations causes objects in the agentâ s inventory to be combined or transformed. Each task in this game corresponds to some crafted object the agent must produce; the most complicated goals require the agent to also craft intermediate ingredients, and in some cases build tools (like a pickaxe and a bridge) to reach ingredients located in initially inaccessible regions of the environment. The maze environment (not pictured) corresponds closely to the the â | 1611.01796#24 | 1611.01796#26 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#26 | Modular Multitask Reinforcement Learning with Policy Sketches | light worldâ described by Konidaris & Barto (2007). The agent is placed in a discrete world consist- ing of a series of rooms, some of which are connected by doors. Some doors require that the agent ï¬ rst pick up a key to open them. For our experiments, each task corre- sponds to a goal room (always at the same position relative to the agentâ s starting position) that the agent must reach by navigating through a sequence of intermediate rooms. The agent has one sensor on each side of its body, which reports the distance to keys, closed doors, and open doors in the corresponding direction. Sketches specify a particu- lar sequence of directions for the agent to traverse between rooms to reach the goal. The sketch always corresponds to a viable traversal from the start to the goal position, but other (possibly shorter) traversals may also exist. The cliff environment (Figure 3b) is intended to demon- strate the applicability of our approach to problems in- volving high-dimensional continuous control. In this en- vironment, a quadrupedal robot (Schulman et al., 2015b) is placed on a variable-length winding path, and must navi- Modular Multitask Reinforcement Learning with Policy Sketches | 1611.01796#25 | 1611.01796#27 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#27 | Modular Multitask Reinforcement Learning with Policy Sketches | (a) (b) (c) Figure 4: Comparing modular learning from sketches with standard RL baselines. Modular is the approach described in this paper, while Independent learns a separate policy for each task, Joint learns a shared policy that conditions on the task identity, Q automaton learns a single network to map from states and action symbols to Q values, and Optâ Crit is an unsupervised option learner. Performance for the best iteration of the (off-policy) Q automaton is plotted. Performance is shown in (a) the crafting environment, (b) the maze environment, and (c) the cliff environment. The modular approach is eventually able to achieve high reward on all tasks, while the baseline models perform considerably worse on average. gate to the end without falling off. This task is designed to provide a substantially more challenging RL problem, due to the fact that the walker must learn the low-level walk- ing skill before it can make any progress, but has simpler hierarchical structure than the crafting environment. The agent receives a small reward for making progress toward the goal, and a large positive reward for reaching the goal square, with a negative reward for falling off the path. | 1611.01796#26 | 1611.01796#28 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#28 | Modular Multitask Reinforcement Learning with Policy Sketches | A listing of tasks and sketches is given in Appendix A. # 4.3. Multitask Learning The primary experimental question in this paper is whether the extra structure provided by policy sketches alone is enough to enable fast learning of coupled policies across tasks. We aim to explore the differences between the approach described in Section 3 and relevant prior work that performs either unsupervised or weakly supervised multitask learning of hierarchical policy structure. Speciï¬ - cally, we compare our modular to approach to: 1. Structured hierarchical reinforcement learners: The joint and independent models performed best when trained with the same curriculum described in Section 3.3, while the optionâ critic model performed best with a lengthâ weighted curriculum that has access to all tasks from the beginning of training. Learning curves for baselines and the modular model are shown in Figure 4. It can be seen that in all environments, our approach substantially outperforms the baselines: it in- duces policies with substantially higher average reward and converges more quickly than the policy gradient baselines. It can further be seen in Figure 4c that after policies have been learned on simple tasks, the model is able to rapidly adapt to more complex ones, even when the longer tasks involve high-level actions not required for any of the short tasks (Appendix A). Having demonstrated the overall effectiveness of our ap- proach, our remaining experiments explore (1) the impor- tance of various components of the training procedure, and (2) the learned modelsâ ability to generalize or adapt to held-out tasks. For compactness, we restrict our consid- eration on the crafting domain, which features a larger and more diverse range of tasks and high-level actions. (a) the fully unsupervised optionâ critic algorithm of Bacon & Precup (2015) # 4.4. Ablations (b) a Q automaton that attempts to explicitly repre- sent the Q function for each task / subtask com- bination (essentially a HAM (Andre & Russell, 2002) with a deep state abstraction function) In addition to the overall modular parameter-tying structure induced by our sketches, the key components of our train- ing procedure are the decoupled critic and the curriculum. Our next experiments investigate the extent to which these are necessary for good performance. 2. Alternative ways of incorporating sketch data into standard policy gradient methods: | 1611.01796#27 | 1611.01796#29 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#29 | Modular Multitask Reinforcement Learning with Policy Sketches | (c) learning an independent policy for each task (d) learning a joint policy across all tasks, conditioning directly on both environment features and a representation of the complete sketch To evaluate the critic, we consider three ablations: (1) removing the dependence of the model on the environment state, in which case the baseline is a single scalar per task; (2) removing the dependence of the model on the task, in which case the baseline is a conventional generalized advantage estimator; and (3) removing both, in which case | 1611.01796#28 | 1611.01796#30 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#30 | Modular Multitask Reinforcement Learning with Policy Sketches | Figure 5: Training details in the crafting domain. (a) Critics: lines labeled 'task' include a baseline that varies with task identity, while lines labeled 'state' include a baseline that varies with state identity. Estimating a baseline that depends on both the representation of the current state and the identity of the current task is better than either alone or a constant baseline. (b) Curricula: lines labeled 'len' use a curriculum with iteratively increasing sketch lengths, while lines labeled 'wgt' sample tasks in inverse proportion to their current reward. Adjusting the sampling distribution based on both task length and performance return improves convergence. (c) Individual task performance. Colors correspond to task length. Sharp steps in the learning curve correspond to increases of ℓ_max in the curriculum. the baseline is a single scalar, as in a vanilla policy gradient approach. Results are shown in Figure 5a. Introducing both state and task dependence into the baseline leads to faster convergence of the model: the approach with a constant baseline achieves less than half the overall performance of the full critic after 3 million episodes. Introducing task and state dependence independently improves this performance; combining them gives the best result. | 1611.01796#29 | 1611.01796#31 | 1611.01796 | [
"1606.04695"
]
|
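
The critic ablation above distinguishes baselines that are constant, task-dependent, state-dependent, or both. The snippet below is an illustrative sketch of those variants for advantage estimation; the function names, the per-task linear critic form, and the feature representation are assumptions rather than the paper's implementation.

```python
import numpy as np

# Minimal sketch (not the authors' code): three baseline variants for
# policy-gradient advantage estimation, mirroring the ablation above.
# "constant": one scalar; "task": one scalar per task;
# "state+task": a linear critic over state features with per-task weights.
def advantages(returns, states, task_ids, mode, params):
    """returns: [N], states: [N, D] features, task_ids: [N] ints."""
    if mode == "constant":
        baseline = params["b"] * np.ones_like(returns)
    elif mode == "task":
        baseline = params["b_task"][task_ids]
    elif mode == "state+task":
        # per-task linear critic: b(s, tau) = w_tau . s
        baseline = np.einsum("nd,nd->n", states, params["W_task"][task_ids])
    return returns - baseline

# toy usage with hypothetical shapes
N, D, n_tasks = 4, 3, 2
params = {"b": 0.5,
          "b_task": np.zeros(n_tasks),
          "W_task": np.zeros((n_tasks, D))}
adv = advantages(np.ones(N), np.random.randn(N, D),
                 np.array([0, 1, 0, 1]), "state+task", params)
```
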
1611.01796#31 | Modular Multitask Reinforcement Learning with Policy Sketches | Table 1: Accuracy and generalization of learned models in the crafting domain. The table shows the task completion rate for each approach after convergence under various training conditions. Multitask is the multitask training condition described in Section 4.3, while 0-Shot and Adaptation are the generalization experiments described in Section 4.5. Our modular approach consistently achieves the best performance. Results (Multitask / 0-shot / Adaptation): Joint .49 / .01 / -; Independent .44 / - / .01; Option-Critic .47 / - / .42; Modular (ours) .89 / .77 / .76. We hold out two length-four tasks from the full inventory used in Section 4.3, and train on the remaining tasks. For zero-shot experiments, we simply form the concatenated policy described by the sketches of the held-out tasks, and repeatedly execute this policy (without learning) in order to obtain an estimate of its effectiveness. For adaptation experiments, we consider ordinary RL over high-level actions B, implementing the high-level learner with the same agent architecture as described in Section 3.1. | 1611.01796#30 | 1611.01796#32 | 1611.01796 | [
"1606.04695"
]
|
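
The zero-shot protocol described in the record above (concatenate the held-out task's subpolicies and execute them without learning) can be sketched as follows; the environment interface, the STOP convention, and the policy signatures are hypothetical stand-ins, not the paper's actual code.

```python
# Illustrative sketch of the zero-shot evaluation: given trained
# subpolicies (one per sketch symbol), execute them in sketch order.
STOP = "STOP"

def run_sketch(env, subpolicies, sketch, max_steps=1000):
    obs, total_reward = env.reset(), 0.0
    for symbol in sketch:                 # e.g. ["get wood", "use toolshed"]
        policy = subpolicies[symbol]
        for _ in range(max_steps):
            action = policy(obs)          # subpolicy picks a low-level action
            if action == STOP:            # subpolicy signals it is finished
                break
            obs, reward, done = env.step(action)
            total_reward += reward
            if done:
                return total_reward
    return total_reward
```
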
1611.01796#32 | Modular Multitask Reinforcement Learning with Policy Sketches | Note that the Independent and Option-Critic models cannot be applied to the zero-shot evaluation, while the Joint model cannot be applied to the adaptation baseline (because it depends on pre-specified sketch features). Results are shown in Table 1. The held-out tasks are sufficiently challenging that the baselines are unable to obtain more than negligible reward: in particular, the joint model overfits to the training tasks and cannot generalize to new sketches, while the independent model cannot discover enough of a reward signal to learn in the adaptation setting. The modular model does comparatively well: individual subpolicies succeed in novel zero-shot configurations (suggesting that they have in fact discovered the behavior suggested by the semantics of the sketch) and provide a suitable basis for adaptive discovery of new high-level policies. We also investigate two aspects of our curriculum learning scheme: starting with short examples and moving to long ones, and sampling tasks in inverse proportion to their accumulated reward. Experiments are shown in Figure 5b. Both components help; prioritization by both length and weight gives the best results. # 4.5. | 1611.01796#31 | 1611.01796#33 | 1611.01796 | [
"1606.04695"
]
|
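
The curriculum discussed above combines two heuristics: admit only tasks whose sketches are short enough, and sample admitted tasks in inverse proportion to their current reward. A minimal sketch, with assumed smoothing constant and data structures:

```python
import numpy as np

# Illustrative curriculum sampler (assumed details, not the authors' code):
# tasks with sketch length <= l_max are eligible, and eligible tasks are
# drawn with probability roughly inverse to their current average reward.
def sample_task(rng, sketch_lengths, avg_rewards, l_max, eps=1e-2):
    eligible = np.flatnonzero(np.asarray(sketch_lengths) <= l_max)
    weights = 1.0 / (eps + np.asarray(avg_rewards)[eligible])
    probs = weights / weights.sum()
    return rng.choice(eligible, p=probs)

rng = np.random.default_rng(0)
task = sample_task(rng, sketch_lengths=[2, 2, 3, 4],
                   avg_rewards=[0.9, 0.2, 0.1, 0.0], l_max=3)
```
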
1611.01796#33 | Modular Multitask Reinforcement Learning with Policy Sketches | Zero-shot and Adaptation Learning In our final experiments, we consider the model's ability to generalize beyond the standard training condition. We first consider two tests of generalization: a zero-shot setting, in which the model is provided a sketch for the new task and must immediately achieve good performance, and an adaptation setting, in which no sketch is provided and the model must learn the form of a suitable sketch via interaction in the new task. # 5. Conclusions | 1611.01796#32 | 1611.01796#34 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#34 | Modular Multitask Reinforcement Learning with Policy Sketches | We have described an approach for multitask learning of deep multitask policies guided by symbolic policy sketches. By associating each symbol appearing in a sketch with a modular neural subpolicy, we have shown that it is possible to build agents that share behavior across tasks in order to achieve success in tasks with sparse and delayed rewards. This process induces an inventory of reusable and interpretable subpolicies which can be employed for zero-shot generalization when further sketches are available, and hierarchical reinforcement learning when they are not. Our work suggests that these sketches, which are easy to produce and require no grounding in the environment, provide an effective scaffold for learning hierarchical policies from minimal supervision. | 1611.01796#33 | 1611.01796#35 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#35 | Modular Multitask Reinforcement Learning with Policy Sketches | # Acknowledgments JA is supported by a Facebook Graduate Fellowship and a Berkeley AI / Huawei Fellowship. Devin, Coline, Gupta, Abhishek, Darrell, Trevor, Abbeel, Pieter, and Levine, Sergey. Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv:1609.07088, 2016. # References Andre, David and Russell, Stuart. Programmable reinforce- ment learning agents. In Advances in Neural Information Processing Systems, 2001. Andre, David and Russell, Stuart. State abstraction for pro- grammable reinforcement learning agents. | 1611.01796#34 | 1611.01796#36 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#36 | Modular Multitask Reinforcement Learning with Policy Sketches | In Proceed- ings of the Meeting of the Association for the Advance- ment of Artiï¬ cial Intelligence, 2002. Andreas, Jacob, Rohrbach, Marcus, Darrell, Trevor, and Klein, Dan. Learning to compose neural networks for question answering. In Proceedings of the Annual Meet- ing of the North American Chapter of the Association for Computational Linguistics, 2016. Artzi, Yoav and Zettlemoyer, Luke. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computa- tional Linguistics, 1(1):49â | 1611.01796#35 | 1611.01796#37 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#37 | Modular Multitask Reinforcement Learning with Policy Sketches | 62, 2013. Dietterich, Thomas G. Hierarchical reinforcement learning with the maxq value function decomposition. J. Artif. Intell. Res. (JAIR), 13:227â 303, 2000. Greensmith, Evan, Bartlett, Peter L, and Baxter, Jonathan. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471â 1530, 2004. | 1611.01796#36 | 1611.01796#38 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#38 | Modular Multitask Reinforcement Learning with Policy Sketches | Hauser, Kris, Bretl, Timothy, Harada, Kensuke, and Latombe, Jean-Claude. Using motion primitives in prob- abilistic sample-based planning for humanoid robots. In Algorithmic foundation of robotics, pp. 507â 522. Springer, 2008. Iyyer, Mohit, Boyd-Graber, Jordan, Claudino, Leonardo, Socher, Richard, and Daum´e III, Hal. A neural net- work for factoid question answering over paragraphs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2014. Bacon, Pierre-Luc and Precup, Doina. The option-critic ar- chitecture. In NIPS Deep Reinforcement Learning Work- shop, 2015. Kearns, Michael and Singh, Satinder. Near-optimal rein- forcement learning in polynomial time. Machine Learn- ing, 49(2-3):209â 232, 2002. Bakker, Bram and Schmidhuber, J¨urgen. | 1611.01796#37 | 1611.01796#39 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#39 | Modular Multitask Reinforcement Learning with Policy Sketches | Hierarchical rein- forcement learning based on subgoal discovery and sub- policy specialization. In Proc. of the 8-th Conf. on Intel- ligent Autonomous Systems, pp. 438â 445, 2004. Bengio, Yoshua, Louradour, J´erË ome, Collobert, Ronan, and In International Weston, Jason. Curriculum learning. Conference on Machine Learning, pp. 41â 48. ACM, 2009. Branavan, S.R.K., Chen, Harr, Zettlemoyer, Luke S., and Barzilay, Regina. Reinforcement learning for mapping In Proceedings of the Annual instructions to actions. Meeting of the Association for Computational Linguis- tics, pp. 82â 90. Association for Computational Linguis- tics, 2009. Chen, David L. and Mooney, Raymond J. Learning to inter- pret natural language navigation instructions from obser- vations. In Proceedings of the Meeting of the Association for the Advancement of Artiï¬ cial Intelligence, volume 2, pp. 1â 2, 2011. Konidaris, George and Barto, Andrew G. | 1611.01796#38 | 1611.01796#40 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#40 | Modular Multitask Reinforcement Learning with Policy Sketches | Building portable options: Skill transfer in reinforcement learning. In IJ- CAI, volume 7, pp. 895â 900, 2007. Konidaris, George, Kuindersma, Scott, Grupen, Roderic, and Barto, Andrew. Robot learning from demonstration by constructing skill trees. The International Journal of Robotics Research, pp. 0278364911428653, 2011. Kulkarni, Tejas D, Narasimhan, Karthik R, Saeedi, Arda- van, and Tenenbaum, Joshua B. Hierarchical deep rein- forcement learning: Integrating temporal abstraction and intrinsic motivation. arXiv preprint arXiv:1604.06057, 2016. | 1611.01796#39 | 1611.01796#41 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#41 | Modular Multitask Reinforcement Learning with Policy Sketches | Marthi, Bhaskara, Lantham, David, Guestrin, Carlos, and Russell, Stuart. Concurrent hierarchical reinforcement learning. In Proceedings of the Meeting of the Associa- tion for the Advancement of Artiï¬ cial Intelligence, 2004. Menache, Ishai, Mannor, Shie, and Shimkin, Nahum. Q-cutdynamic discovery of sub-goals in reinforcement In European Conference on Machine Learn- learning. ing, pp. 295â 306. Springer, 2002. | 1611.01796#40 | 1611.01796#42 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#42 | Modular Multitask Reinforcement Learning with Policy Sketches | Daniel, Christian, Neumann, Gerhard, and Peters, Jan. Hi- erarchical relative entropy policy search. In Proceedings of the International Conference on Artiï¬ cial Intelligence and Statistics, pp. 273â 281, 2012. Neelakantan, Arvind, Le, Quoc V, and Sutskever, Ilya. Neural programmer: Inducing latent programs with gra- dient descent. arXiv preprint arXiv:1511.04834, 2015. Modular Multitask Reinforcement Learning with Policy Sketches | 1611.01796#41 | 1611.01796#43 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#43 | Modular Multitask Reinforcement Learning with Policy Sketches | Niekum, Scott, Osentoski, Sarah, Konidaris, George, Chitta, Sachin, Marthi, Bhaskara, and Barto, Andrew G. Learning grounded ï¬ nite-state representations from un- structured demonstrations. The International Journal of Robotics Research, 34(2):131â 157, 2015. Vogel, Adam and Jurafsky, Dan. Learning to follow navi- gational directions. In Proceedings of the Annual Meet- ing of the Association for Computational Linguistics, pp. 806â 814. Association for Computational Linguistics, 2010. | 1611.01796#42 | 1611.01796#44 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#44 | Modular Multitask Reinforcement Learning with Policy Sketches | Parr, Ron and Russell, Stuart. Reinforcement learning with hierarchies of machines. In Advances in Neural Infor- mation Processing Systems, 1998. Williams, Ronald J. Simple statistical gradient-following learning. algorithms for connectionist reinforcement Machine learning, 8(3-4):229â 256, 1992. Precup, Doina. Temporal abstraction in reinforcement learning. PhD thesis, 2000. Reed, Scott and de Freitas, Nando. Neural programmer- interpreters. Proceedings of the International Confer- ence on Learning Representations, 2016. Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. High-dimensional con- tinuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015a. Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. Trust region policy op- In International Conference on Machine timization. Learning, 2015b. | 1611.01796#43 | 1611.01796#45 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#45 | Modular Multitask Reinforcement Learning with Policy Sketches | Socher, Richard, Huval, Brody, Manning, Christopher, and Ng, Andrew. Semantic compositionality through recur- sive matrix-vector spaces. In Proceedings of the Confer- ence on Empirical Methods in Natural Language Pro- cessing, pp. 1201â 1211, Jeju, Korea, 2012. Stolle, Martin and Precup, Doina. Learning options in rein- forcement learning. In International Symposium on Ab- straction, Reformulation, and Approximation, pp. 212â 223. Springer, 2002. Sutton, Richard S, Precup, Doina, and Singh, Satinder. | 1611.01796#44 | 1611.01796#46 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#46 | Modular Multitask Reinforcement Learning with Policy Sketches | Be- tween MDPs and semi-MDPs: A framework for tempo- ral abstraction in reinforcement learning. Artiï¬ cial intel- ligence, 112(1):181â 211, 1999. Tellex, Stefanie, Kollar, Thomas, Dickerson, Steven, Wal- ter, Matthew R., Banerjee, Ashis Gopal, Teller, Seth, and Roy, Nicholas. Understanding natural language com- mands for robotic navigation and mobile manipulation. | 1611.01796#45 | 1611.01796#47 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#47 | Modular Multitask Reinforcement Learning with Policy Sketches | In In Proceedings of the National Conference on Artiï¬ - cial Intelligence, 2011. Tieleman, Tijmen. RMSProp (unpublished), 2012. Vezhnevets, Alexander, Mnih, Volodymyr, Agapiou, John, Osindero, Simon, Graves, Alex, Vinyals, Oriol, and Kavukcuoglu, Koray. Strategic attentive writer for learn- ing macro-actions. arXiv preprint arXiv:1606.04695, 2016. Modular Multitask Reinforcement Learning with Policy Sketches | 1611.01796#46 | 1611.01796#48 | 1611.01796 | [
"1606.04695"
]
|
1611.01796#48 | Modular Multitask Reinforcement Learning with Policy Sketches | # A. Tasks and Sketches The complete list of tasks, sketches, and symbols is given below. Tasks marked with an asterisk* are held out for the generalization experiments described in Section 4.5, but included in the multitask training experiments in Sections 4.3 and 4.4. Crafting environment (Goal: Sketch): make plank: get wood, use toolshed; make stick: get wood, use workbench; make cloth: get grass, use factory; make rope: get grass, use toolshed; make bridge: get iron, get wood, use factory; make bed*: get wood, use toolshed, get grass, use workbench; make axe*: get wood, use workbench, get iron, use toolshed; make shears: get wood, use workbench, get iron, use workbench; get gold: get iron, get wood, use factory, use bridge; get gem: get wood, use workbench, get iron, use toolshed, use axe. # Maze environment room 1 room 2 room 3 room 4 room 5 room 6 room 7 room 8 room 9 room 10 left left right up up up down left right left left down down left right right right left down up up up down down right | 1611.01796#47 | 1611.01796#49 | 1611.01796 | [
"1606.04695"
]
|
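
For convenience, the crafting portion of the table above can be transcribed as a plain data structure. This simply restates the sketches as reconstructed from the flattened listing, so if that listing was mis-read, individual entries may differ.

```python
# Crafting tasks and their sketches from Appendix A (starred tasks are the
# ones held out in Section 4.5), written as a dictionary for reference.
CRAFTING_SKETCHES = {
    "make plank":  ["get wood", "use toolshed"],
    "make stick":  ["get wood", "use workbench"],
    "make cloth":  ["get grass", "use factory"],
    "make rope":   ["get grass", "use toolshed"],
    "make bridge": ["get iron", "get wood", "use factory"],
    "make bed*":   ["get wood", "use toolshed", "get grass", "use workbench"],
    "make axe*":   ["get wood", "use workbench", "get iron", "use toolshed"],
    "make shears": ["get wood", "use workbench", "get iron", "use workbench"],
    "get gold":    ["get iron", "get wood", "use factory", "use bridge"],
    "get gem":     ["get wood", "use workbench", "get iron", "use toolshed", "use axe"],
}
HELD_OUT = [task for task in CRAFTING_SKETCHES if task.endswith("*")]
```
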
1611.01796#49 | Modular Multitask Reinforcement Learning with Policy Sketches | # Cliff environment path 0 path 1 path 2 path 3 path 4 path 5 path 6 path 7 path 8 path 9 path 10 path 11 path 12 path 13 path 14 path 15 path 16 path 17 path 18 path 19 path 20 path 21 path 22 path 23 north east south west west west north west east north east south south south south east east east north west north north west south south north east north south west north east west south south south east north east west north west west east north north west east west south south south | 1611.01796#48 | 1611.01796 | [
"1606.04695"
]
|
|
1611.01576#0 | Quasi-Recurrent Neural Networks | arXiv:1611.01576v2 [cs.NE] 21 Nov 2016 # Under review as a conference paper at ICLR 2017 # QUASI-RECURRENT NEURAL NETWORKS James Bradbury*, Stephen Merity*, Caiming Xiong & Richard Socher Salesforce Research Palo Alto, California {james.bradbury,smerity,cxiong,rsocher}@salesforce.com # ABSTRACT | 1611.01576#1 | 1611.01576 | [
"1605.07725"
]
|
|
1611.01576#1 | Quasi-Recurrent Neural Networks | Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks. # INTRODUCTION Recurrent neural networks (RNNs), including gated variants such as the long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) have become the standard model architecture for deep learning approaches to sequence modeling tasks. RNNs repeatedly apply a function with trainable parameters to a hidden state. Recurrent layers can also be stacked, increasing network depth, representational power and often accuracy. RNN applications in the natural language domain range from sentence classification (Wang et al., 2015) to word- and character-level language modeling (Zaremba et al., 2014). RNNs are also commonly the basic building block for more complex models for tasks such as machine translation (Bahdanau et al., 2015; Luong et al., 2015; Bradbury & Socher, 2016) or question answering (Kumar et al., 2016; Xiong et al., 2016). Unfortunately standard RNNs, including LSTMs, are limited in their capability to handle tasks involving very long sequences, such as document classification or character-level machine translation, as the computation of features or states for different parts of the document cannot occur in parallel. | 1611.01576#0 | 1611.01576#2 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#2 | Quasi-Recurrent Neural Networks | Convolutional neural networks (CNNs) (Krizhevsky et al., 2012), though more popular on tasks in- volving image data, have also been applied to sequence encoding tasks (Zhang et al., 2015). Such models apply time-invariant ï¬ lter functions in parallel to windows along the input sequence. CNNs possess several advantages over recurrent models, including increased parallelism and better scal- ing to long sequences such as those often seen with character-level language data. Convolutional models for sequence processing have been more successful when combined with RNN layers in a hybrid architecture (Lee et al., 2016), because traditional max- and average-pooling approaches to combining convolutional features across timesteps assume time invariance and hence cannot make full use of large-scale sequence order information. We present quasi-recurrent neural networks for neural sequence modeling. QRNNs address both drawbacks of standard models: like CNNs, QRNNs allow for parallel computation across both timestep and minibatch dimensions, enabling high throughput and good scaling to long sequences. Like RNNs, QRNNs allow the output to depend on the overall order of elements in the sequence. We describe QRNN variants tailored to several natural language tasks, including document-level sentiment classiï¬ cation, language modeling, and character-level machine translation. These models outperform strong LSTM baselines on all three tasks while dramatically reducing computation time. | 1611.01576#1 | 1611.01576#3 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#3 | Quasi-Recurrent Neural Networks | [Figure 1 block diagram comparing LSTM, CNN, and QRNN computation structure omitted.] Figure 1: Block diagrams showing the computation structure of the QRNN compared with typical LSTM and CNN architectures. Red signifies convolutions or matrix multiplications; a continuous block means that those computations can proceed in parallel. Blue signifies parameterless functions that operate in parallel along the channel/feature dimension. LSTMs can be factored into (red) linear blocks and (blue) elementwise blocks, but computation at each timestep still depends on the results from the previous timestep. # 2 MODEL Each layer of a quasi-recurrent neural network consists of two kinds of subcomponents, analogous to convolution and pooling layers in CNNs. The convolutional component, like convolutional layers in CNNs, allows fully parallel computation across both minibatches and spatial dimensions, in this case the sequence dimension. The pooling component, like pooling layers in CNNs, lacks trainable parameters and allows fully parallel computation across minibatch and feature dimensions. Given an input sequence X ∈ R^{T×n} of T n-dimensional vectors x_1 . . . x_T, the convolutional subcomponent of a QRNN performs convolutions in the timestep dimension with a bank of m filters, producing a sequence Z ∈ R^{T×m} of m-dimensional candidate vectors z_t. In order to be useful for tasks that include prediction of the next token, the filters must not allow the computation for any given timestep to access information from future timesteps. That is, with filters of width k, each z_t depends only on x_{t−k+1} through x_t. This concept, known as a masked convolution (van den Oord et al., 2016), is implemented by padding the input to the left by the convolution's filter size minus one. We apply additional convolutions with separate | 1611.01576#2 | 1611.01576#4 | 1611.01576 | [
"1605.07725"
]
|
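
A minimal sketch of the masked convolution described above, implemented by left zero-padding with k - 1 timesteps so that output t never sees future inputs. Shapes and names are illustrative, not the authors' code.

```python
import numpy as np

def masked_conv(X, W):
    """Masked (causal) 1-D convolution. X: [T, n] sequence; W: [k, n, m]
    filter bank; returns [T, m], where output t depends on x_{t-k+1}..x_t."""
    k, n, m = W.shape
    T = X.shape[0]
    Xp = np.concatenate([np.zeros((k - 1, n)), X], axis=0)  # left pad
    out = np.zeros((T, m))
    for t in range(T):
        window = Xp[t:t + k]                      # x_{t-k+1} ... x_t
        out[t] = np.einsum("kn,knm->m", window, W)
    return out

# candidate vectors Z as in the text (toy sizes)
Z = np.tanh(masked_conv(np.random.randn(10, 4), np.random.randn(2, 4, 8)))
```
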
1611.01576#4 | Quasi-Recurrent Neural Networks | filter banks to obtain sequences of vectors for the elementwise gates that are needed for the pooling function. While the candidate vectors are passed through a tanh nonlinearity, the gates use an elementwise sigmoid. If the pooling function requires a forget gate f_t and an output gate o_t at each timestep, the full set of computations in the convolutional component is then: Z = tanh(W_z * X), F = σ(W_f * X), O = σ(W_o * X), (1) where W_z, W_f, and W_o, each in R^{k×n×m}, are the convolutional filter banks and * denotes a masked convolution along the timestep dimension. Note that if the filter width is 2, these equations reduce to the LSTM-like z_t = tanh(W^1_z x_{t−1} + W^2_z x_t), f_t = σ(W^1_f x_{t−1} + W^2_f x_t), o_t = σ(W^1_o x_{t−1} + W^2_o x_t). (2) Convolution filters of larger width effectively compute higher n-gram features at each timestep; thus larger widths are especially important for character-level tasks. Suitable functions for the pooling subcomponent can be constructed from the familiar elementwise gates of the traditional LSTM cell. We seek a function controlled by gates that can mix states across timesteps, but which acts independently on each channel of the state vector. The simplest option, which Balduzzi & Ghifary (2016) term 'dynamic average pooling', uses only a forget gate: h_t = f_t ⊙ h_{t−1} + (1 − f_t) ⊙ z_t, (3) | 1611.01576#3 | 1611.01576#5 | 1611.01576 | [
"1605.07725"
]
|
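
Combining the gate computations of Eq. 1 with the f-pooling recurrence of Eq. 3 gives a complete (if unoptimized) single-layer forward pass. This sketch reuses the masked_conv helper from the previous snippet and is illustrative only; the weight shapes are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qrnn_f_pool(X, Wz, Wf):
    """One QRNN layer with f-pooling. X: [T, n]; Wz, Wf: [k, n, m]."""
    Z = np.tanh(masked_conv(X, Wz))      # candidate vectors, Eq. 1
    F = sigmoid(masked_conv(X, Wf))      # forget gates, Eq. 1
    h = np.zeros(Z.shape[1])             # h initialized to zero
    H = np.zeros_like(Z)
    for t in range(Z.shape[0]):          # cheap elementwise recurrence, Eq. 3
        h = F[t] * h + (1.0 - F[t]) * Z[t]
        H[t] = h
    return H
```
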
1611.01576#5 | Quasi-Recurrent Neural Networks | where ⊙ denotes elementwise multiplication. The function may also include an output gate: c_t = f_t ⊙ c_{t−1} + (1 − f_t) ⊙ z_t, h_t = o_t ⊙ c_t. (4) Or the recurrence relation may include an independent input and forget gate: c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t, h_t = o_t ⊙ c_t. (5) We term these three options f-pooling, fo-pooling, and ifo-pooling respectively; in each case we initialize h or c to zero. Although the recurrent parts of these functions must be calculated for each timestep in sequence, their simplicity and parallelism along feature dimensions means that, in practice, evaluating them over even long sequences requires a negligible amount of computation time. A single QRNN layer thus performs an input-dependent pooling, followed by a gated linear combination of convolutional features. As with convolutional neural networks, two or more QRNN layers should be stacked to create a model with the capacity to approximate more complex functions. | 1611.01576#4 | 1611.01576#6 | 1611.01576 | [
"1605.07725"
]
|
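
The fo- and ifo-pooling variants differ from f-pooling only in the elementwise recurrence. A sketch, under the assumption that the gate sequences Z, F, O, I come from masked convolutions as in Eq. 1 (with I produced by an extra filter bank):

```python
import numpy as np

def ifo_pool(Z, F, O, I):
    """ifo-pooling (Eq. 5). All inputs are [T, m] gate/candidate sequences."""
    c = np.zeros(Z.shape[1])             # c initialized to zero
    H = np.zeros_like(Z)
    for t in range(Z.shape[0]):
        c = F[t] * c + I[t] * Z[t]       # fo-pooling (Eq. 4) uses (1 - F[t]) instead of I[t]
        H[t] = O[t] * c
    return H
```
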
1611.01576#6 | Quasi-Recurrent Neural Networks | 2.1 VARIANTS Motivated by several common natural language tasks, and the long history of work on related ar- chitectures, we introduce several extensions to the stacked QRNN described above. Notably, many extensions to both recurrent and convolutional models can be applied directly to the QRNN as it combines elements of both model types. Regularization An important extension to the stacked QRNN is a robust regularization scheme inspired by recent work in regularizing LSTMs. The need for an effective regularization method for LSTMs, and dropoutâ | 1611.01576#5 | 1611.01576#7 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#7 | Quasi-Recurrent Neural Networks | s relative lack of efï¬ cacy when applied to recurrent connections, led to the development of recurrent dropout schemes, in- cluding variational inferenceâ based dropout (Gal & Ghahramani, 2016) and zoneout (Krueger et al., 2016). These schemes extend dropout to the recurrent setting by taking advantage of the repeating structure of recurrent networks, providing more powerful and less destructive regularization. Variational inferenceâ based dropout locks the dropout mask used for the recurrent connections across timesteps, so a single RNN pass uses a single stochastic subset of the recurrent weights. Zoneout stochastically chooses a new subset of channels to â zone outâ at each timestep; for these channels the network copies states from one timestep to the next without modiï¬ cation. | 1611.01576#6 | 1611.01576#8 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#8 | Quasi-Recurrent Neural Networks | As QRNNs lack recurrent weights, the variational inference approach does not apply. Thus we extended zoneout to the QRNN architecture by modifying the pooling function to keep the previous pooling state for a stochastic subset of channels. Conveniently, this is equivalent to stochastically setting a subset of the QRNN's f gate channels to 1, or applying dropout on 1 − f: F = 1 − dropout(1 − σ(W_f * X)) (6) Thus the pooling function itself need not be modified at all. We note that when using an off-the-shelf dropout layer in this context, it is important to remove automatic rescaling functionality from the implementation if it is present. In many experiments, we also apply ordinary dropout between layers, including between word embeddings and the first QRNN layer. Densely-Connected Layers We can also extend the QRNN architecture using techniques introduced for convolutional networks. For sequence classification tasks, we found it helpful to use skip-connections between every QRNN layer, a technique termed 'dense convolution' by Huang et al. (2016). Where traditional feed-forward or convolutional networks have connections only between subsequent layers, a 'DenseNet' with L layers has feed-forward or convolutional connections between every pair of layers, for a total of L(L−1). This can improve gradient flow and convergence properties, especially in deeper networks, although it requires a parameter count that is quadratic in the number of layers. When applying this technique to the QRNN, we include connections between the input embeddings and every QRNN layer and between every pair of QRNN layers. This is equivalent to concatenating | 1611.01576#7 | 1611.01576#9 | 1611.01576 | [
"1605.07725"
]
|
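
Equation 6 amounts to applying unscaled dropout to 1 − f, so that zoned-out channels receive f = 1 and simply copy the previous pooling state. A training-mode sketch; the zoneout-rate handling and RNG plumbing are assumptions.

```python
import numpy as np

def zoneout_forget_gate(F, zoneout_p, rng):
    """Apply Eq. 6 to a forget-gate sequence F = sigmoid(W_f * X), [T, m].
    Dropout on (1 - F) WITHOUT the usual 1/(1-p) rescaling."""
    keep_mask = rng.random(F.shape) >= zoneout_p   # 0 => channel zoned out
    return 1.0 - (1.0 - F) * keep_mask             # zoned-out channels get f = 1

rng = np.random.default_rng(0)
F = 1.0 / (1.0 + np.exp(-rng.standard_normal((10, 8))))  # stand-in gates
F_zoneout = zoneout_forget_gate(F, zoneout_p=0.1, rng=rng)
```
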
1611.01576#9 | Quasi-Recurrent Neural Networks | [Figure 2 schematic of the QRNN encoder-decoder (convolution, fo-pool, attention, and output-gate layers) omitted.] Figure 2: The QRNN encoder-decoder architecture used for machine translation experiments. each QRNN layer's input to its output along the channel dimension before feeding the state into the next layer. The output of the last layer alone is then used as the overall encoding result. Encoder-Decoder Models To demonstrate the generality of QRNNs, we extend the model architecture to sequence-to-sequence tasks, such as machine translation, by using a QRNN as encoder and a modified QRNN, enhanced with attention, as decoder. The motivation for modifying the decoder is that simply feeding the last encoder hidden state (the output of the encoder's pooling layer) into the decoder's recurrent pooling layer, analogously to conventional recurrent encoder-decoder architectures, would not allow the encoder state to affect the gate or update values that are provided to the decoder's pooling layer. This would substantially limit the representational power of the decoder. | 1611.01576#8 | 1611.01576#10 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#10 | Quasi-Recurrent Neural Networks | Instead, the output of each decoder QRNN layer's convolution functions is supplemented at every timestep with the final encoder hidden state. This is accomplished by adding the result of the convolution for layer ℓ (e.g., W^ℓ_z * X^ℓ, in R^{T×m}) with broadcasting to a linearly projected copy of layer ℓ's last encoder state (e.g., V^ℓ_z h̃^ℓ_T, in R^m): Z^ℓ = tanh(W^ℓ_z * X^ℓ + V^ℓ_z h̃^ℓ_T), F^ℓ = σ(W^ℓ_f * X^ℓ + V^ℓ_f h̃^ℓ_T), O^ℓ = σ(W^ℓ_o * X^ℓ + V^ℓ_o h̃^ℓ_T), (7) where the tilde denotes that h̃ is an encoder variable. Encoder-decoder models which operate on long sequences are made significantly more powerful with the addition of soft attention (Bahdanau et al., 2015), which removes the need for the entire input representation to fit into a fixed-length encoding vector. In our experiments, we computed an attentional sum of the encoder's last layer's hidden states. | 1611.01576#9 | 1611.01576#11 | 1611.01576 | [
"1605.07725"
]
|
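
A sketch of Eq. 7: the decoder's masked-convolution outputs are shifted, with broadcasting across decoder timesteps, by linear projections of the encoder's final hidden state for that layer. Shapes and names are illustrative assumptions.

```python
import numpy as np

def decoder_gates(conv_Z, conv_F, conv_O, h_enc_last, Vz, Vf, Vo):
    """conv_*: [T_dec, m] masked-conv outputs for one decoder layer;
    h_enc_last: [m] final encoder state for that layer; V*: [m, m]."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    Z = np.tanh(conv_Z + Vz @ h_enc_last)   # [m] broadcast over T_dec rows
    F = sigmoid(conv_F + Vf @ h_enc_last)
    O = sigmoid(conv_O + Vo @ h_enc_last)
    return Z, F, O
```
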
1611.01576#11 | Quasi-Recurrent Neural Networks | We used the dot products of these encoder hidden states with the decoder's last layer's un-gated hidden states, applying a softmax along the encoder timesteps, to weight the encoder states into an attentional sum k_t for each decoder timestep. This context, and the decoder state, are then fed into a linear layer followed by the output gate: α_{st} = softmax_{all s}(c^L_t · h̃^L_s), k_t = Σ_s α_{st} h̃^L_s, h^L_t = o_t ⊙ (W_k k_t + W_c c^L_t), (8) where L is the last layer. While the first step of this attention procedure is quadratic in the sequence length, in practice it takes significantly less computation time than the model's linear and convolutional layers due to the simple and highly parallel dot-product scoring function. | 1611.01576#10 | 1611.01576#12 | 1611.01576 | [
"1605.07725"
]
|
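
A sketch of the attention in Eq. 8: dot-product scores between un-gated decoder states and encoder states, a softmax over encoder timesteps, and an output gate applied to a linear mix of context and cell state. Illustrative only, not the paper's implementation.

```python
import numpy as np

def qrnn_attention(C, H_enc, O, Wk, Wc):
    """C: [T_dec, m] decoder cell states; H_enc: [T_enc, m] encoder states;
    O: [T_dec, m] output gates; Wk, Wc: [m, m] linear maps."""
    scores = C @ H_enc.T                          # [T_dec, T_enc] dot products
    scores -= scores.max(axis=1, keepdims=True)   # numerical stabilization
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)     # softmax over encoder steps
    K = alpha @ H_enc                             # attentional sums k_t
    return O * (K @ Wk.T + C @ Wc.T)              # h_t = o_t * (W_k k_t + W_c c_t)
```
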
1611.01576#12 | Quasi-Recurrent Neural Networks | IMDb results (Model; Time / Epoch (s); Test Acc (%)): NBSVM-bi (Wang & Manning, 2012): -, 91.2; 2 layer sequential BoW CNN (Johnson & Zhang, 2014): -, 92.3; Ensemble of RNNs and NB-SVM (Mesnil et al., 2014): -, 92.6; 2-layer LSTM (Longpre et al., 2016): -, 87.6; Residual 2-layer bi-LSTM (Longpre et al., 2016): -, 90.1. | 1611.01576#11 | 1611.01576#13 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#13 | Quasi-Recurrent Neural Networks | Our models: Densely-connected 4-layer LSTM (cuDNN optimized): 480 s/epoch, 90.9; Densely-connected 4-layer QRNN: 150 s/epoch, 91.4; Densely-connected 4-layer QRNN with k = 4: 160 s/epoch, 91.1. Table 1: Accuracy comparison on the IMDb binary sentiment classification task. All of our models use 256 units per layer; all layers other than the first layer, whose filter width may vary, use filter width k = 2. Train times are reported on a single NVIDIA K40 GPU. | 1611.01576#12 | 1611.01576#14 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#14 | Quasi-Recurrent Neural Networks | We exclude semi-supervised models that conduct additional training on the unlabeled portion of the dataset. # 3 EXPERIMENTS We evaluate the performance of the QRNN on three different natural language tasks: document-level sentiment classiï¬ cation, language modeling, and character-based neural machine translation. Our QRNN models outperform LSTM-based models of equal hidden size on all three tasks while dra- matically improving computation speed. Experiments were implemented in Chainer (Tokui et al.). 3.1 SENTIMENT CLASSIFICATION We evaluate the QRNN architecture on a popular document-level sentiment classiï¬ cation bench- mark, the IMDb movie review dataset (Maas et al., 2011). The dataset consists of a balanced sample of 25,000 positive and 25,000 negative reviews, divided into equal-size train and test sets, with an average document length of 231 words (Wang & Manning, 2012). We compare only to other results that do not make use of additional unlabeled data (thus excluding e.g., Miyato et al. (2016)). Our best performance on a held-out development set was achieved using a four-layer densely- connected QRNN with 256 units per layer and word vectors initialized using 300-dimensional cased GloVe embeddings (Pennington et al., 2014). Dropout of 0.3 was applied between layers, and we used L? regularization of 4 x 10~°. Optimization was performed on minibatches of 24 examples using RMSprop (Tieleman & Hinton, 2012) with learning rate of 0.001, a = 0.9, and e = 107°. Small batch sizes and long sequence lengths provide an ideal situation for demonstrating the QRNNâ s performance advantages over traditional recurrent architectures. We observed a speedup of 3.2x on IMDb train time per epoch compared to the optimized LSTM implementation provided in NVIDIAâ s cuDNN library. | 1611.01576#13 | 1611.01576#15 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#15 | Quasi-Recurrent Neural Networks | Our best performance on a held-out development set was achieved using a four-layer densely-connected QRNN with 256 units per layer and word vectors initialized using 300-dimensional cased GloVe embeddings (Pennington et al., 2014). Dropout of 0.3 was applied between layers, and we used L2 regularization of 4 × 10^-6. Optimization was performed on minibatches of 24 examples using RMSprop (Tieleman & Hinton, 2012) with learning rate of 0.001, α = 0.9, and ε = 10^-8. Small batch sizes and long sequence lengths provide an ideal situation for demonstrating the QRNN's performance advantages over traditional recurrent architectures. We observed a speedup of 3.2x on IMDb train time per epoch compared to the optimized LSTM implementation provided in NVIDIA's cuDNN library. For specific batch sizes and sequence lengths, a 16x speed gain is possible. Figure 4 provides extensive speed comparisons. In Figure 3, we visualize the hidden state vectors c^L_t of the final QRNN layer on part of an example from the IMDb dataset. Even without any post-processing, changes in the hidden state are visible and interpretable in regards to the input. This is a consequence of the elementwise nature of the recurrent pooling function, which delays direct interaction between different channels of the hidden state until the computation of the next QRNN layer. 3.2 LANGUAGE MODELING We replicate the language modeling experiment of Zaremba et al. (2014) and Gal & Ghahramani (2016) to benchmark the QRNN architecture for natural language sequence prediction. The experiment uses a standard preprocessed version of the Penn Treebank (PTB) by Mikolov et al. (2010). We implemented a gated QRNN model with medium hidden size: 2 layers with 640 units in each layer. Both QRNN layers use a convolutional filter width k of two timesteps. While the 'medium' models used in other work (Zaremba et al., 2014; Gal & Ghahramani, 2016) consist of 650 units in | 1611.01576#14 | 1611.01576#16 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#16 | Quasi-Recurrent Neural Networks | [Figure 3 heatmap of hidden-state activations omitted; axes: timesteps (words) versus hidden units.] Figure 3: Visualization of the final QRNN layer's hidden state vectors c^L_t in the IMDb task, with timesteps along the vertical axis. Colors denote neuron activations. | 1611.01576#15 | 1611.01576#17 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#17 | Quasi-Recurrent Neural Networks | After an initial positive statement â This movie is simply gorgeousâ (off graph at timestep 9), timestep 117 triggers a reset of most hidden states due to the phrase â not exactly a bad storyâ (soon after â main weakness is its storyâ ). Only at timestep 158, after â I recommend this movie to everyone, even if youâ ve never played the gameâ , do the hidden units recover. each layer, it was more computationally convenient to use a multiple of 32. As the Penn Treebank is a relatively small dataset, preventing overï¬ tting is of considerable importance and a major focus of recent research. It is not obvious in advance which of the many RNN regularization schemes would perform well when applied to the QRNN. Our tests showed encouraging results from zoneout applied to the QRNNâ s recurrent pooling layer, implemented as described in Section 2.1. The experimental settings largely followed the â mediumâ setup of Zaremba et al. (2014). Optimiza- tion was performed by stochastic gradient descent (SGD) without momentum. The learning rate was set at 1 for six epochs, then decayed by 0.95 for each subsequent epoch, for a total of 72 epochs. We additionally used L2 regularization of 2 à 10â 4 and rescaled gradients with norm above 10. Zoneout was applied by performing dropout with ratio 0.1 on the forget gates of the QRNN, without rescaling the output of the dropout function. Batches consist of 20 examples, each 105 timesteps. Comparing our results on the gated QRNN with zoneout to the results of LSTMs with both ordinary and variational dropout in Table 2, we see that the QRNN is highly competitive. The QRNN without zoneout strongly outperforms both our medium LSTM and the medium LSTM of Zaremba et al. (2014) which do not use recurrent dropout and is even competitive with variational LSTMs. This may be due to the limited computational capacity that the QRNNâ s pooling layer has relative to the LSTMâ s recurrent weights, providing structural regularization over the recurrence. Without zoneout, early stopping based upon validation loss was required as the QRNN would be- gin overï¬ tting. | 1611.01576#16 | 1611.01576#18 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#18 | Quasi-Recurrent Neural Networks | By applying a small amount of zoneout (p = 0.1), no early stopping is required and the QRNN achieves competitive levels of perplexity to the variational LSTM of Gal & Ghahra- Model Parameters Validation Test LSTM (medium) (Zaremba et al., 2014) Variational LSTM (medium, MC) (Gal & Ghahramani, 2016) LSTM with CharCNN embeddings (Kim et al., 2016) Zoneout + Variational LSTM (medium) (Merity et al., 2016) 20M 20M 19M 20M 86.2 81.9 â 84.4 82.7 79.7 78.9 80.6 Our models LSTM (medium) QRNN (medium) QRNN + zoneout (p = 0.1) (medium) 20M 18M 18M 85.7 82.9 82.1 82.0 79.9 78.3 Table 2: Single model perplexity on validation and test sets for the Penn Treebank language model- ing task. | 1611.01576#17 | 1611.01576#19 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#19 | Quasi-Recurrent Neural Networks | Lower is better. â Mediumâ refers to a two-layer network with 640 or 650 hidden units per layer. All QRNN models include dropout of 0.5 on embeddings and between layers. MC refers to Monte Carlo dropout averaging at test time. 6 # Under review as a conference paper at ICLR 2017 32 Sequence length 128 64 256 e z i s h c t a B 8 16 32 64 128 256 5.5x 5.5x 4.2x 3.0x 2.1x 1.4x 8.8x 6.7x 4.5x 3.0x 1.9x 1.4x 11.0x 7.8x 4.9x 3.0x 2.0x 1.3x 12.4x 8.3x 4.9x 3.0x 2.0x 1.3x 512 16.9x 10.8x 6.4x 3.7x 2.4x 1.3x = Figure 4: Left: Training speed for two-layer 640-unit PTB LM on a batch of 20 examples of 105 timesteps. â RNNâ and â softmaxâ include the forward and backward times, while â optimization overheadâ includes gradient clipping, L2 regularization, and SGD computations. Right: Inference speed advantage of a 320-unit QRNN layer alone over an equal-sized cuDNN LSTM layer for data with the given batch size and sequence length. Training results are similar. mani (2016), which had variational inference based dropout of 0.2 applied recurrently. Their best performing variation also used Monte Carlo (MC) dropout averaging at test time of 1000 different masks, making it computationally more expensive to run. When training on the PTB dataset with an NVIDIA K40 GPU, we found that the QRNN is sub- stantially faster than a standard LSTM, even when comparing against the optimized cuDNN LSTM. In Figure 4 we provide a breakdown of the time taken for Chainerâ | 1611.01576#18 | 1611.01576#20 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#20 | Quasi-Recurrent Neural Networks | s default LSTM, the cuDNN LSTM, and QRNN to perform a full forward and backward pass on a single batch during training of the RNN LM on PTB. For both LSTM implementations, running time was dominated by the RNN computations, even with the highly optimized cuDNN implementation. For the QRNN implementa- tion, however, the â RNNâ layers are no longer the bottleneck. Indeed, there are diminishing returns from further optimization of the QRNN itself as the softmax and optimization overhead take equal or greater time. Note that the softmax, over a vocabulary size of only 10,000 words, is relatively small; for tasks with larger vocabularies, the softmax would likely dominate computation time. It is also important to note that the cuDNN libraryâ s RNN primitives do not natively support any form of recurrent dropout. That is, running an LSTM that uses a state-of-the-art regularization scheme at cuDNN-like speeds would likely require an entirely custom kernel. 3.3 CHARACTER-LEVEL NEURAL MACHINE TRANSLATION We evaluate the sequence-to-sequence QRNN architecture described in 2.1 on a challenging neu- ral machine translation task, IWSLT Germanâ English spoken-domain translation, applying fully character-level segmentation. This dataset consists of 209,772 sentence pairs of parallel training data from transcribed TED and TEDx presentations, with a mean sentence length of 103 characters for German and 93 for English. We remove training sentences with more than 300 characters in English or German, and use a uniï¬ ed vocabulary of 187 Unicode code points. Our best performance on a development set (TED.tst2013) was achieved using a four-layer encoderâ decoder QRNN with 320 units per layer, no dropout or L? regularization, and gradient rescaling to a maximum magnitude of 5. Inputs were supplied to the encoder reversed, while the encoder convolutions were not masked. The first encoder layer used convolutional filter width k = 6, while the other encoder layers used k = 2. Optimization was performed for 10 epochs on minibatches of 16 examples using Adam (Kingma & Ba, 2014) with a = 0.001, 6; = 0.9, 62 = 0.999, and â ¬ = 10-8. | 1611.01576#19 | 1611.01576#21 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#21 | Quasi-Recurrent Neural Networks | Decoding was performed using beam search with beam width 8 and length normalization a = 0.6. The modified log-probability ranking criterion is provided in the appendix. Results using this architecture were compared to an equal-sized four-layer encoderâ decoder LSTM with attention, applying dropout of 0.2. We again optimized using Adam; other hyperparameters were equal to their values for the QRNN and the same beam search procedure was applied. Table 3 shows that the QRNN outperformed the character-level LSTM, almost matching the performance of a word-level attentional baseline. | 1611.01576#20 | 1611.01576#22 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#22 | Quasi-Recurrent Neural Networks | 7 # Under review as a conference paper at ICLR 2017 Model Train Time BLEU (TED.tst2014) Word-level LSTM w/attn (Ranzato et al., 2016) Word-level CNN w/attn, input feeding (Wiseman & Rush, 2016) â â 20.2 24.0 Our models Char-level 4-layer LSTM Char-level 4-layer QRNN with k = 6 4.2 hrs/epoch 1.0 hrs/epoch 16.53 19.41 Table 3: Translation performance, measured by BLEU, and train speed in hours per epoch, for the IWSLT German-English spoken language translation task. All models were trained on in-domain data only, and use negative log-likelihood as the training criterion. Our models were trained for 10 epochs. The QRNN model uses k = 2 for all layers other than the ï¬ rst encoder layer. # 4 RELATED WORK Exploring alternatives to traditional RNNs for sequence tasks is a major area of current research. Quasi-recurrent neural networks are related to several such recently described models, especially the strongly-typed recurrent neural networks (T-RNN) introduced by Balduzzi & Ghifary (2016). While the motivation and constraints described in that work are different, Balduzzi & Ghifary (2016)â s concepts of â learnwareâ and â ï¬ rmwareâ | 1611.01576#21 | 1611.01576#23 | 1611.01576 | [
"1605.07725"
]
|
1611.01576#23 | Quasi-Recurrent Neural Networks | parallel our discussion of convolution-like and pooling-like subcomponents. As the use of a fully connected layer for recurrent connections violates the con- straint of â strong typingâ , all strongly-typed RNN architectures (including the T-RNN, T-GRU, and T-LSTM) are also quasi-recurrent. However, some QRNN models (including those with attention or skip-connections) are not â strongly typedâ . In particular, a T-RNN differs from a QRNN as de- scribed in this paper with ï¬ lter size 1 and f -pooling only in the absence of an activation function on z. Similarly, T-GRUs and T-LSTMs differ from QRNNs with ï¬ lter size 2 and fo- or ifo-pooling respectively in that they lack tanh on z and use tanh rather than sigmoid on o. The QRNN is also related to work in hybrid convolutionalâ recurrent models. Zhou et al. (2015) apply CNNs at the word level to generate n-gram features used by an LSTM for text classiï¬ cation. Xiao & Cho (2016) also tackle text classiï¬ cation by applying convolutions at the character level, with a stride to reduce sequence length, then feeding these features into a bidirectional LSTM. A similar approach was taken by Lee et al. (2016) for character-level machine translation. Their modelâ s encoder uses a convolutional layer followed by max-pooling to reduce sequence length, a four-layer highway network, and a bidirectional GRU. The parallelism of the convolutional, pooling, and highway layers allows training speed comparable to subword-level models without hard-coded text segmentation. The QRNN encoderâ decoder model shares the favorable parallelism and path-length properties ex- hibited by the ByteNet (Kalchbrenner et al., 2016), an architecture for character-level machine trans- lation based on residual convolutions over binary trees. Their model was constructed to achieve three desired properties: parallelism, linear-time computational complexity, and short paths between any pair of words in order to better propagate gradient signals. | 1611.01576#22 | 1611.01576#24 | 1611.01576 | [
"1605.07725"
]
|