# Modular Multitask Reinforcement Learning with Policy Sketches

Jacob Andreas, Dan Klein, Sergey Levine
University of California, Berkeley. Correspondence to: Jacob Andreas <[email protected]>.

arXiv: 1611.01796 (http://arxiv.org/pdf/1611.01796)
Categories: cs.LG (primary), cs.NE
Comment: To appear at ICML 2017
Published: 2016-11-06; updated: 2017-06-17
Referenced arXiv ids: 1606.04695, 1609.07088, 1506.02438, 1511.04834, 1604.06057

Abstract: We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them: specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor–critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level subgoals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.

# 1. Introduction

This paper describes a framework for learning composable deep subpolicies in a multitask setting, guided only by abstract sketches of high-level behavior. General reinforcement learning algorithms allow agents to solve tasks in complex environments. But tasks featuring extremely delayed rewards or other long-term structure are often difficult to solve with flat, monolithic policies, and a long line of prior work has studied methods for learning hierarchical policy representations (Sutton et al., 1999; Dietterich, 2000; Konidaris & Barto, 2007; Hauser et al., 2008). While unsupervised discovery of these hierarchies is possible (Daniel et al., 2012; Bacon & Precup, 2015), practical approaches often require detailed supervision in the form of explicitly specified high-level actions, subgoals, or behavioral primitives (Precup, 2000). These depend on state representations simple or structured enough that suitable reward signals can be effectively engineered by hand.
But is such fine-grained supervision actually necessary to achieve the full benefits of hierarchy? Specifically, is it necessary to explicitly ground high-level actions into the representation of the environment? Or is it sufficient to simply inform the learner about the abstract structure of policies, without ever specifying how high-level behaviors should make use of primitive percepts or actions?

To answer these questions, we explore a multitask reinforcement learning setting where the learner is presented with policy sketches. Policy sketches are short, ungrounded, symbolic representations of a task that describe its component parts, as illustrated in Figure 1. While symbols might be shared across tasks (get wood appears in sketches for both the make planks and make sticks tasks), the learner is told nothing about what these symbols mean, in terms of either observations or intermediate rewards.
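Concretely, a sketch is nothing more than an ordered list of symbol names attached to a task. The minimal sketch below (plain Python; the task and symbol names are loosely modeled on the crafting example above and are illustrative, not the paper's actual task inventory) shows the only structure the learner is given.

```python
# Sketches are ungrounded: the symbols below are opaque names with no associated
# observations, rewards, or termination conditions. Names are illustrative only.
SKETCHES = {
    "make planks": ["get wood", "use workbench"],
    "make sticks": ["get wood", "use toolshed"],
}

# A symbol shared by several sketches ("get wood" here) marks exactly the places
# where the model described below ties subpolicy parameters across tasks.
shared = set(SKETCHES["make planks"]) & set(SKETCHES["make sticks"])
print(shared)  # {'get wood'}
```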
We present an agent architecture that learns from policy sketches by associating each high-level action with a parameterization of a low-level subpolicy, and jointly optimizes over concatenated task-specific policies by tying parameters across shared subpolicies. We find that this architecture can use the high-level guidance provided by sketches, without any grounding or concrete definition, to dramatically accelerate learning of complex multi-stage behaviors. Our experiments indicate that many of the benefits to learning that come from highly detailed low-level supervision (e.g. from subgoal rewards) can also be obtained from fairly coarse high-level supervision (i.e. from policy sketches). Crucially, sketches are much easier to produce: they require no modifications to the environment dynamics or reward function, and can be easily provided by non-experts. This makes it possible to extend the benefits of hierarchical RL to challenging environments where it may not be possible to specify by hand the details of relevant subtasks.
We show that our approach substantially outperforms purely unsupervised methods that do not provide the learner with any task-specific guidance about how hierarchies should be deployed, and further that the specific use of sketches to parameterize modular subpolicies makes better use of sketches than conditioning on them directly.
The present work may be viewed as an extension of recent approaches for learning compositional deep architectures from structured program descriptors (Andreas et al., 2016; Reed & de Freitas, 2016). Here we focus on learning in interactive environments. This extension presents a variety of technical challenges, requiring analogues of these methods that can be trained from sparse, non-differentiable reward signals without demonstrations of desired system behavior.

Our contributions are:

- A general paradigm for multitask, hierarchical, deep reinforcement learning guided by abstract sketches of task-specific policies.
- A concrete recipe for learning from these sketches, built on a general family of modular deep policy representations and a multitask actor–critic training objective.

The modular structure of our approach, which associates every high-level action symbol with a discrete subpolicy, naturally induces a library of interpretable policy fragments that are easily recombined. This makes it possible to evaluate our approach under a variety of different data conditions: (1) learning the full collection of tasks jointly via reinforcement, (2) in a zero-shot setting where a policy sketch is available for a held-out task, and (3) in an adaptation setting, where sketches are hidden and the agent must learn to adapt a pretrained policy to reuse high-level actions in a new task. In all cases, our approach substantially outperforms previous approaches based on explicit decomposition of the Q function along subtasks (Parr & Russell, 1998; Vogel & Jurafsky, 2010), unsupervised option discovery (Bacon & Precup, 2015), and several standard policy gradient baselines.

We consider three families of tasks: a 2-D Minecraft-inspired crafting game (Figure 3a), in which the agent must acquire particular resources by finding raw ingredients, combining them together in the proper order, and in some cases building intermediate tools that enable the agent to alter the environment itself; a 2-D maze navigation task that requires the agent to collect keys and open doors; and a 3-D locomotion task (Figure 3b) in which a quadrupedal robot must actuate its joints to traverse a narrow winding cliff.
In all tasks, the agent receives a reward only after the final goal is accomplished. For the most challenging tasks, involving sequences of four or five high-level actions, a task-specific agent initially following a random policy essentially never discovers the reward signal, so these tasks cannot be solved without considering their hierarchical structure. We have released code at http://github.com/jacobandreas/psketch.
# 2. Related Work

The agent representation we describe in this paper belongs to the broader family of hierarchical reinforcement learners. As detailed in Section 3, our approach may be viewed as an instantiation of the options framework first described by Sutton et al. (1999). A large body of work describes techniques for learning options and related abstract actions, in both single- and multitask settings. Most techniques for learning options rely on intermediate supervisory signals, e.g. to encourage exploration (Kearns & Singh, 2002) or completion of pre-defined subtasks (Kulkarni et al., 2016). An alternative family of approaches employs post-hoc analysis of demonstrations or pretrained policies to extract reusable sub-components (Stolle & Precup, 2002; Konidaris et al., 2011; Niekum et al., 2015). Techniques for learning options with less guidance than the present work include Bacon & Precup (2015) and Vezhnevets et al. (2016), and other general hierarchical policy learners include Daniel et al. (2012), Bakker & Schmidhuber (2004) and Menache et al. (2002).
We will see that the minimal supervision provided by policy sketches results in (sometimes dramatic) improvements over fully unsupervised approaches, while being substantially less onerous for humans to provide compared to the grounded supervision (such as explicit subgoals or feature abstraction hierarchies) used in previous work.

Once a collection of high-level actions exists, agents are faced with the problem of learning meta-level (typically semi-Markov) policies that invoke appropriate high-level actions in sequence (Precup, 2000). The learning problem we describe in this paper is in some sense the direct dual to the problem of learning these meta-level policies: there, the agent begins with an inventory of complex primitives and must learn to model their behavior and select among them; here we begin knowing the names of appropriate high-level actions but nothing about how they are implemented, and must infer implementations (but not, initially, abstract plans) from context. Our model can be combined with these approaches to support a "mixed" supervision condition where sketches are available for some tasks but not others (Section 4.5).
Another closely related line of work is the Hierarchical Abstract Machines (HAM) framework introduced by Parr & Russell (1998). Like our approach, HAMs begin with a representation of a high-level policy as an automaton (or a more general computer program; Andre & Russell, 2001; Marthi et al., 2004) and use reinforcement learning to fill in low-level details. Because these approaches attempt to learn a single representation of the Q function for all subtasks and contexts, they require extremely strong formal assumptions about the form of the reward function and state representation (Andre & Russell, 2002) that the present work avoids by decoupling the policy representation from the value function. They perform less effectively when applied to arbitrary state representations where these assumptions do not hold (Section 4.3). We are additionally unaware of past work showing that HAM automata can be automatically inferred for new tasks given a pre-trained model, while here we show that it is easy to solve the corresponding problem for sketch followers (Section 4.5).
Our approach is also inspired by a number of recent efforts toward compositional reasoning and interaction with structured deep models. Such models have been previously used for tasks involving question answering (Iyyer et al., 2014; Andreas et al., 2016) and relational reasoning (Socher et al., 2012), and more recently for multi-task, multi-robot transfer problems (Devin et al., 2016). In the present work, as in existing approaches employing dynamically assembled modular networks, task-specific training signals are propagated through a collection of composed discrete structures with tied weights. Here the composed structures specify time-varying policies rather than feedforward computations, and their parameters must be learned via interaction rather than direct supervision. Another closely related family of models includes neural programmers (Neelakantan et al., 2015) and programmer-interpreters (Reed & de Freitas, 2016), which generate discrete computational structures but require supervision in the form of output actions or full execution traces.
We view the problem of learning from policy sketches as complementary to the instruction following problem studied in the natural language processing literature. Existing work on instruction following focuses on mapping from natural language strings to symbolic action sequences that are then executed by a hard-coded interpreter (Branavan et al., 2009; Chen & Mooney, 2011; Artzi & Zettlemoyer, 2013; Tellex et al., 2011). Here, by contrast, we focus on learning to execute complex actions given symbolic representations as a starting point. Instruction following models may be viewed as joint policies over instructions and environment observations (so their behavior is not defined in the absence of instructions), while the model described in this paper naturally supports adaptation to tasks where no sketches are available. We expect that future work might combine the two lines of research, bootstrapping policy learning directly from natural language hints rather than the semi-structured sketches used here.
# 3. Learning Modular Policies from Sketches We consider a multitask reinforcement learning prob- lem arising from a family of infinite-horizon discounted Markov decision processes in a shared environment. This environment is specified by a tuple (S,.A, P, 7), with S a set of states, A a set of low-level actions, P:S x AxS— R a transition probability distribution, and 7 a discount fac- tor. Each task + € T is then specified by a pair (R-,p,), with R, : S — R a task-specific reward function and p, : S — Ran initial distribution over states. For a fixed sequence {(s;,a;)} of states and actions obtained from a rollout of a given policy, we will denote the empirical return starting in state 5; as qi == 72,4, 7 ~*~" R(s;). In addi- tion to the components of a standard multitask RL problem, we assume that tasks are annotated with sketches K,, each consisting of a sequence (b,1,b;2,...) of high-level sym- bolic labels drawn from a fixed vocabulary B. # B # 3.1. Model
# 3.1. Model

We exploit the structural information provided by sketches by constructing for each symbol b a corresponding subpolicy π_b. By sharing each subpolicy across all tasks annotated with the corresponding symbol, our approach naturally learns the shared abstraction for the corresponding subtask, without requiring any information about the grounding of that task to be explicitly specified by annotation.

Algorithm 1 TRAIN-STEP(Π, curriculum)
1: D ← ∅
2: while |D| < D do
3:   // sample task τ from curriculum (Section 3.3)
4:   τ ∼ curriculum(·)
5:   // do rollout
6:   d = {(s_i, a_i, (b_i = b_{τi}), q_i, τ), ...} ∼ Π_τ
7:   D ← D ∪ d
8: // update parameters
9: for b ∈ B, τ ∈ T do
10:   d = {(s_i, a_i, b', q_i, τ') ∈ D : b' = b, τ' = τ}
11:   // update subpolicy
12:   θ_b ← θ_b + (α / D) Σ_d (∇_{θ_b} log π_b(a_i | s_i)) (q_i - c_τ(s_i))
13:   // update critic
14:   η_τ ← η_τ + (β / D) Σ_d (∇_{η_τ} c_τ(s_i)) (q_i - c_τ(s_i))
Algorithm 2 TRAIN-LOOP()
1: // initialize subpolicies randomly
2: Π = INIT()
3: ℓ_max ← 1
4: loop
5:   r_min ← -∞
6:   // initialize ℓ_max-step curriculum uniformly
7:   T' = {τ ∈ T : |K_τ| ≤ ℓ_max}
8:   curriculum(·) = Unif(T')
9:   while r_min < r_good do
10:     // update parameters (Algorithm 1)
11:     TRAIN-STEP(Π, curriculum)
12:     curriculum(τ) ∝ 1[τ ∈ T'] (1 - Êr_τ), ∀τ ∈ T
13:     r_min ← min_{τ ∈ T'} Êr_τ
14:   ℓ_max ← ℓ_max + 1
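The two procedures above can be read as ordinary imperative code. The sketch below is a minimal Python rendering of TRAIN-STEP only, not the released implementation; `sample_task`, `rollout`, `update_subpolicy`, and `update_critic` are hypothetical stand-ins for the curriculum distribution, environment interaction under Π_τ, and the gradient updates described in Section 3.2.

```python
from collections import defaultdict

def train_step(sample_task, rollout, update_subpolicy, update_critic, max_size):
    """One TRAIN-STEP: collect on-policy data, then update parameters grouped by (symbol, task).

    `rollout(task)` is assumed to return (state, action, symbol, empirical_return) tuples
    produced by the concatenated policy for that task.
    """
    dataset = []
    while len(dataset) < max_size:
        task = sample_task()                          # tau ~ curriculum (Section 3.3)
        for state, action, symbol, ret in rollout(task):
            dataset.append((state, action, symbol, ret, task))

    groups = defaultdict(list)                        # one subpolicy per symbol, one critic per task
    for state, action, symbol, ret, task in dataset:
        groups[(symbol, task)].append((state, action, ret))

    for (symbol, task), samples in groups.items():
        update_subpolicy(symbol, task, samples)       # actor step, cf. line 12 of Algorithm 1
        update_critic(task, samples)                  # critic step, cf. line 14 of Algorithm 1
```

Algorithm 2's outer loop would then call `train_step` repeatedly while reweighting the curriculum and growing the sketch-length limit.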
At each timestep, a subpolicy may select either a low-level action a ∈ A or a special STOP action. We denote the augmented action space A+ := A ∪ {STOP}. At a high level, this framework is agnostic to the implementation of subpolicies: any function that takes a representation of the current state onto a distribution over A+ is an acceptable implementation of π_b.

In this paper, we focus on the case where each π_b is represented as a neural network.¹ These subpolicies may be viewed as options of the kind described by Sutton et al. (1999), with the key distinction that they have no initiation semantics, but are instead invokable everywhere, and have no explicit representation as a function from an initial state to a distribution over final states (instead implicitly using the STOP action to terminate).
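As one concrete instance of "any function from states to distributions over A+", the snippet below implements a subpolicy as a single linear-softmax layer in NumPy; the architectures, state encodings, and action sets used in the paper's experiments are not specified here, so every name and dimension is illustrative.

```python
import numpy as np

STOP = "STOP"

class Subpolicy:
    """A subpolicy pi_b: maps a state feature vector to a distribution over A+ = A ∪ {STOP}."""

    def __init__(self, n_features, actions, seed=0):
        self.actions = list(actions) + [STOP]         # augmented action space A+
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((len(self.actions), n_features))

    def distribution(self, state):
        logits = self.W @ state
        probs = np.exp(logits - logits.max())         # softmax over A+
        return probs / probs.sum()

    def act(self, state, rng):
        return rng.choice(self.actions, p=self.distribution(state))

# Hypothetical usage with a 4-dimensional state encoding and 5 primitive actions.
pi_get_wood = Subpolicy(n_features=4, actions=["up", "down", "left", "right", "use"])
print(pi_get_wood.act(np.ones(4), np.random.default_rng(1)))
```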
Given a fixed sketch (b_1, b_2, ...), a task-specific policy Π_τ is formed by concatenating its associated subpolicies in sequence. In particular, the high-level policy maintains a subpolicy index i (initially 0), and executes actions from π_{b_i} until the STOP symbol is emitted, at which point control is passed to π_{b_{i+1}}. We may thus think of Π_τ as inducing a Markov chain over the state space S × B, with transitions:

$$(s, b_i) \rightarrow (s', b_i) \quad \text{with probability} \quad \sum_{a \in \mathcal{A}} \pi_{b_i}(a \mid s)\, P(s' \mid s, a)$$
$$(s, b_i) \rightarrow (s, b_{i+1}) \quad \text{with probability} \quad \pi_{b_i}(\mathrm{STOP} \mid s)$$

Note that Π_τ is semi-Markov with respect to projection of the augmented state space S × B onto the underlying state space S. We denote the complete family of task-specific policies Π := {Π_τ}, and let each π_b be an arbitrary function of the current environment state parameterized by some weight vector θ_b. The learning problem is to optimize over all θ_b to maximize expected discounted reward

$$J(\Pi) := \sum_\tau J(\Pi_\tau) := \sum_\tau \mathbb{E}_{s_i \sim \Pi_\tau}\Big[\sum_i R_\tau(s_i)\Big]$$

across all tasks τ ∈ T.
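The control flow of Π_τ is just the loop below; `env` is a hypothetical environment object with `observe()` and `step(action)` methods, and each entry of `subpolicies` exposes the `act` interface sketched above.

```python
def run_task_policy(sketch, subpolicies, env, rng, max_steps=200):
    """Execute the concatenated policy Pi_tau for one episode and record its trajectory."""
    trajectory = []
    index = 0                                   # subpolicy index i, initially 0
    for _ in range(max_steps):
        if index >= len(sketch):                # every subtask has emitted STOP
            break
        symbol = sketch[index]
        state = env.observe()
        action = subpolicies[symbol].act(state, rng)
        if action == "STOP":                    # control passes from pi_{b_i} to pi_{b_{i+1}}
            index += 1
            continue
        reward = env.step(action)
        trajectory.append((state, action, symbol, reward))
    return trajectory
```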
# 3.2. Policy Optimization

Here that optimization is accomplished via a simple decoupled actor–critic method. In a standard policy gradient approach, with a single policy π with parameters θ, we compute gradient steps of the form (Williams, 1992):

$$\nabla_\theta J(\pi) = \sum_i \big(\nabla_\theta \log \pi(a_i \mid s_i)\big)\big(q_i - c(s_i)\big), \tag{1}$$

where the baseline or "critic" c can be chosen independently of the future without introducing bias into the gradient. Recalling our previous definition of q_i as the empirical return starting from s_i, this form of the gradient corresponds to a generalized advantage estimator (Schulman et al., 2015a) with λ = 1. Here c achieves close to the optimal variance (Greensmith et al., 2004) when it is set exactly equal to the state-value function V_π(s_i) = E_π q_i for the target policy π starting in state s_i.
¹ For ease of presentation, this section assumes that these subpolicy networks are independently parameterized. As described in Section 4.2, it is also possible to share parameters between subpolicies, and introduce discrete subtask structure by way of an embedding of each symbol b.

Figure 2: Model overview. Each subpolicy π is uniquely associated with a symbol b and implemented as a neural network that maps from a state s_i to distributions over A+, and chooses an action a_i by sampling from this distribution. Whenever the STOP action is sampled, control advances to the next subpolicy in the sketch.
The situation becomes slightly more complicated when generalizing to modular policies built by sequencing subpolicies. In this case, we will have one subpolicy per symbol but one critic per task. This is because subpolicies π_b might participate in a number of composed policies Π_τ, each associated with its own reward function R_τ. Thus individual subpolicies are not uniquely identified with value functions, and the aforementioned subpolicy-specific state-value estimator is no longer well-defined. We extend the actor–critic method to incorporate the decoupling of policies from value functions by allowing the critic to vary per-sample (that is, per-task-and-timestep) depending on the reward function with which the sample is associated. Noting that $\nabla_{\theta_b} J(\Pi) = \sum_{\tau : b \in K_\tau} \nabla_{\theta_b} J(\Pi_\tau)$, i.e. the sum of gradients of expected rewards across all tasks in which π_b participates, we have:

$$\nabla_{\theta_b} J(\Pi) = \sum_\tau \nabla_{\theta_b} J(\Pi_\tau) = \sum_i \big(\nabla_{\theta_b} \log \pi_b(a_{\tau i} \mid s_{\tau i})\big)\big(q_i - c_\tau(s_{\tau i})\big), \tag{2}$$

where each state-action pair (s_{τi}, a_{τi}) was selected by the subpolicy π_b in the context of the task τ.
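For a linear-softmax subpolicy like the one sketched in Section 3.1, the per-symbol update of Equation 2 can be written out directly; the parameterization, step size, and data layout below are illustrative, and `critic_values` stands in for the per-task baselines c_τ(s).

```python
import numpy as np

def subpolicy_gradient_step(W, samples, critic_values, lr=0.1):
    """One ascent step on Eq. (2) for a single subpolicy with linear-softmax weights W.

    `samples` holds (state_vector, action_index, empirical_return) tuples gathered from
    every task whose sketch uses this subpolicy's symbol; `critic_values[k]` is c_tau(s_k)
    computed by the critic of the task that produced sample k.
    """
    grad = np.zeros_like(W)
    for (state, action, ret), baseline in zip(samples, critic_values):
        logits = W @ state
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        dlogpi = -np.outer(probs, state)        # d log pi(a|s) / dW for a softmax policy
        dlogpi[action] += state
        grad += dlogpi * (ret - baseline)       # advantage uses the sample's task-specific critic
    return W + lr * grad / max(len(samples), 1)
```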
Now minimization of the gradient variance requires that each c_τ actually depend on the task identity. (This follows immediately by applying the corresponding argument in Greensmith et al. (2004) individually to each term in the sum over τ in Equation 2.) Because the value function is itself unknown, an approximation must be estimated from data. Here we allow these c_τ to be implemented with an arbitrary function approximator with parameters η_τ. This is trained to minimize a squared error criterion, with gradients given by

$$\nabla_{\eta_\tau}\Big[-\frac{1}{2}\sum_i \big(q_i - c_\tau(s_i)\big)^2\Big] = \sum_i \big(\nabla_{\eta_\tau} c_\tau(s_i)\big)\big(q_i - c_\tau(s_i)\big). \tag{3}$$
Alternative forms of the advantage estimator (e.g. the TD residual $R_\tau(s_i) + \gamma V_\tau(s_{i+1}) - V_\tau(s_i)$, or any other member of the generalized advantage estimator family) can be easily substituted by simply maintaining one such estimator per task. Experiments (Section 4.4) show that conditioning on both the state and the task identity results in noticeable performance improvements, suggesting that the variance reduction provided by this objective is important for efficient joint learning of modular policies.
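With a linear critic c_τ(s) = η_τ · s (again purely illustrative, not the approximator used in the experiments), the squared-error update of Equation 3 and the TD-residual alternative look as follows.

```python
import numpy as np

def critic_step(eta, states, returns, lr=0.1):
    """Reduce the squared error of one task's critic, cf. Eq. (3), for c(s) = eta . s."""
    grad = np.zeros_like(eta)
    for s, q in zip(states, returns):
        grad += s * (q - eta @ s)               # (grad_eta c(s)) * (q - c(s)) for a linear critic
    return eta + lr * grad / max(len(states), 1)

def td_residual_advantages(eta, states, rewards, gamma=0.9):
    """Alternative per-task advantage estimator: R(s_i) + gamma * V(s_{i+1}) - V(s_i)."""
    values = [float(eta @ s) for s in states] + [0.0]   # bootstrap value 0 past the last state
    return [r + gamma * values[i + 1] - values[i] for i, r in enumerate(rewards)]
```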
Alternative forms of the advantage estimator (e.g. the TD residual Rτ(si) + γVτ(si+1) − Vτ(si), or any other member of the generalized advantage estimator family) can be easily substituted by simply maintaining one such estimator per task. Experiments (Section 4.4) show that conditioning on both the state and the task identity results in noticeable performance improvements, suggesting that the variance reduction provided by this objective is important for efficient joint learning of modular policies.
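A per-task TD-residual advantage of the kind mentioned above can be sketched in a few lines; the array layout (one extra value entry for the final state) is our own convention for this illustration, not something specified in the paper.

```python
import numpy as np

def td_residual_advantages(rewards, values, gamma=0.9):
    """values[i] approximates V_tau(s_i); values has one more entry than rewards."""
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    # R_tau(s_i) + gamma * V_tau(s_{i+1}) - V_tau(s_i), computed per task
    return rewards + gamma * values[1:] - values[:-1]
```

Maintaining one such estimator per task simply means calling this with that task's own value estimates.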
# 3.3. Curriculum Learning

For complex tasks, like the one depicted in Figure 3b, it is difficult for the agent to discover any states with positive reward until many subpolicy behaviors have already been learned. It is thus a better use of the learner’s time to focus on “easy” tasks, where many rollouts will result in high reward from which appropriate subpolicy behavior can be inferred. But there is a fundamental tradeoff involved here: if the learner spends too much time on easy tasks before being made aware of the existence of harder ones, it may overfit and learn subpolicies that no longer generalize or exhibit the desired structural properties.

To avoid both of these problems, we use a curriculum learning scheme (Bengio et al., 2009) that allows the model to smoothly scale up from easy tasks to more difficult ones while avoiding overfitting. Initially the model is presented with tasks associated with short sketches. Once average reward on all these tasks reaches a certain threshold, the length limit is incremented. We assume that rewards across tasks are normalized with maximum achievable reward 0 < qi < 1. Let ˆErτ denote the empirical estimate of the expected reward for the current policy on task τ. Then, at each timestep, tasks are sampled in proportion to 1 − ˆErτ, which by assumption must be positive.
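The sampling rule can be made concrete with a short sketch; the dictionary-based interface below is an assumption made for illustration, not the authors' API.

```python
import random

def sample_task(reward_estimates, sketch_lengths, l_max):
    """Sample a task with probability proportional to 1 - \\hat{E}r_tau."""
    eligible = [t for t, n in sketch_lengths.items() if n <= l_max]
    # The paper assumes 1 - \hat{E}r_tau is positive; the floor is a numerical guard.
    weights = [max(1.0 - reward_estimates.get(t, 0.0), 1e-6) for t in eligible]
    return random.choices(eligible, weights=weights, k=1)[0]
```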
Intuitively, the tasks that provide the strongest learning signal are those in which (1) the agent does not on average achieve reward close to the upper bound, but (2) many episodes result in high reward. The expected reward component of the curriculum addresses condition (1) by ensuring that time is not spent on nearly solved tasks, while the length bound component of the curriculum addresses condition (2) by ensuring that tasks are not attempted until high-reward episodes are likely to be encountered. Experiments show that both components of this curriculum learning scheme improve the rate at which the model converges to a good policy (Section 4.4). The complete procedure for computing a single gradient step is given in Algorithm 1. (The outer training loop over these gradient steps is driven by the curriculum described below.)
The complete curriculum-based training procedure is specified in Algorithm 2. Initially, the maximum sketch length ℓmax is set to 1, and the curriculum is initialized to sample length-1 tasks uniformly. (Neither of the environments we consider in this paper features any length-1 tasks; in this case, observe that Algorithm 2 will simply advance to length-2 tasks without any parameter updates.) For each setting of ℓmax, the algorithm uses the current collection of task policies Π to compute and apply the gradient step described in Algorithm 1. The rollouts obtained from the call to TRAIN-STEP can also be used to compute reward estimates ˆErτ; these estimates determine a new task distribution for the curriculum. The inner loop is repeated until the reward threshold rgood is exceeded, at which point ℓmax is incremented and the process repeated over a (now-expanded) collection of tasks.
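A high-level sketch of this curriculum loop is shown below, written against an assumed interface: `train_step` stands in for the Algorithm 1 gradient step over the eligible tasks and returns its rollouts, and `estimate_rewards` turns those rollouts into per-task estimates of expected reward. Both names are placeholders rather than the authors' API.

```python
def curriculum_train(sketch_lengths, train_step, estimate_rewards,
                     r_good=0.8, max_sketch_len=5):
    """Iteratively expand the task set from short sketches to long ones."""
    l_max = 1
    while l_max <= max_sketch_len:
        eligible = [t for t, n in sketch_lengths.items() if n <= l_max]
        while eligible:                                      # skipped if no tasks yet
            rollouts = train_step(eligible)                  # Algorithm 1 gradient step
            rewards = estimate_rewards(rollouts)             # \hat{E}r_tau per task
            avg = sum(rewards.get(t, 0.0) for t in eligible) / len(eligible)
            if avg >= r_good:
                break                                        # threshold reached
        l_max += 1                                           # now-expanded collection of tasks
```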
# 4. Experiments

We evaluate the performance of our approach in three environments: a crafting environment, a maze navigation environment, and a cliff traversal environment. These environments involve various kinds of challenging low-level control: agents must learn to avoid obstacles, interact with various kinds of objects, and relate fine-grained joint activation to high-level locomotion goals. They also feature hierarchical structure: most rewards are provided only after the agent has completed two to five high-level actions in the appropriate sequence, without any intermediate goals to indicate progress towards completion.
Figure 3: Examples from the crafting and cliff environments used in this paper. An additional maze environment is also investigated. (a) In the crafting environment, an agent seeking to pick up the gold nugget in the top corner must first collect wood (1) and iron (2), use a workbench to turn them into a bridge (3), and use the bridge to cross the water (4). (b) In the cliff environment, the agent must reach a goal position by traversing a winding sequence of tiles without falling off. Control takes place at the level of individual joint angles; high-level behaviors like “move north” must be learned.
# 4.1. Implementation

In all our experiments, we implement each subpolicy as a feedforward neural network with ReLU nonlinearities and a hidden layer with 128 hidden units, and each critic as a linear function of the current state. Each subpolicy network receives as input a set of features describing the current state of the environment, and outputs a distribution over actions. The agent acts at every timestep by sampling from this distribution. The gradient steps given in lines 8 and 9 of Algorithm 1 are implemented using RMSPROP (Tieleman, 2012) with a step size of 0.001 and gradient clipping to a unit norm. We take the batch size D in Algorithm 1 to be 2000, and set γ = 0.9 in both environments. For curriculum learning, the improvement threshold rgood is 0.8.
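A minimal sketch of this parameterization is shown below. PyTorch is our choice of framework (the paper does not specify one), and the feature and action counts are placeholders; the pieces that do come from the text are the 128-unit ReLU hidden layer, sampling actions from the output distribution, RMSProp with step size 0.001, and clipping gradients to unit norm.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

def make_subpolicy(n_features, n_actions):
    # Feedforward net with one 128-unit ReLU hidden layer, as described above.
    return nn.Sequential(
        nn.Linear(n_features, 128),
        nn.ReLU(),
        nn.Linear(128, n_actions),
    )

subpolicy = make_subpolicy(n_features=32, n_actions=5)   # placeholder sizes
optimizer = torch.optim.RMSprop(subpolicy.parameters(), lr=1e-3)

def act(state_features):
    # Sample an action from the distribution produced by the subpolicy.
    logits = subpolicy(torch.as_tensor(state_features, dtype=torch.float32))
    return Categorical(logits=logits).sample().item()

def apply_gradients(loss):
    # One optimizer step with gradients clipped to unit norm.
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(subpolicy.parameters(), 1.0)
    optimizer.step()
```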
# 4.2. Environments

The crafting environment (Figure 3a) is inspired by the popular game Minecraft, but is implemented in a discrete 2-D world. The agent may interact with objects in the world by facing them and executing a special USE action. Interacting with raw materials initially scattered around the environment causes them to be added to an inventory. Interacting with different crafting stations causes objects in the agent’s inventory to be combined or transformed. Each task in this game corresponds to some crafted object the agent must produce; the most complicated goals require the agent to also craft intermediate ingredients, and in some cases build tools (like a pickaxe and a bridge) to reach ingredients located in initially inaccessible regions of the environment.

The maze environment (not pictured) corresponds closely to the “light world” described by Konidaris & Barto (2007). The agent is placed in a discrete world consisting of a series of rooms, some of which are connected by doors. Some doors require that the agent first pick up a key to open them. For our experiments, each task corresponds to a goal room (always at the same position relative to the agent’s starting position) that the agent must reach by navigating through a sequence of intermediate rooms. The agent has one sensor on each side of its body, which reports the distance to keys, closed doors, and open doors in the corresponding direction. Sketches specify a particular sequence of directions for the agent to traverse between rooms to reach the goal. The sketch always corresponds to a viable traversal from the start to the goal position, but other (possibly shorter) traversals may also exist.
The cliff environment (Figure 3b) is intended to demonstrate the applicability of our approach to problems involving high-dimensional continuous control. In this environment, a quadrupedal robot (Schulman et al., 2015b) is placed on a variable-length winding path, and must navigate to the end without falling off.

Figure 4: Comparing modular learning from sketches with standard RL baselines. Modular is the approach described in this paper, while Independent learns a separate policy for each task, Joint learns a shared policy that conditions on the task identity, Q automaton learns a single network to map from states and action symbols to Q values, and Opt–Crit is an unsupervised option learner. Performance for the best iteration of the (off-policy) Q automaton is plotted. Performance is shown in (a) the crafting environment, (b) the maze environment, and (c) the cliff environment. The modular approach is eventually able to achieve high reward on all tasks, while the baseline models perform considerably worse on average.
This task is designed to provide a substantially more challenging RL problem, due to the fact that the walker must learn the low-level walking skill before it can make any progress, but it has simpler hierarchical structure than the crafting environment. The agent receives a small reward for making progress toward the goal, and a large positive reward for reaching the goal square, with a negative reward for falling off the path. A listing of tasks and sketches is given in Appendix A.

# 4.3. Multitask Learning

The primary experimental question in this paper is whether the extra structure provided by policy sketches alone is enough to enable fast learning of coupled policies across tasks. We aim to explore the differences between the approach described in Section 3 and relevant prior work that performs either unsupervised or weakly supervised multitask learning of hierarchical policy structure. Specifically, we compare our modular approach to:

1. Structured hierarchical reinforcement learners:

(a) the fully unsupervised option–critic algorithm of Bacon & Precup (2015)

(b) a Q automaton that attempts to explicitly represent the Q function for each task / subtask combination (essentially a HAM (Andre & Russell, 2002) with a deep state abstraction function)

2. Alternative ways of incorporating sketch data into standard policy gradient methods:

(c) learning an independent policy for each task

(d) learning a joint policy across all tasks, conditioning directly on both environment features and a representation of the complete sketch

The joint and independent models performed best when trained with the same curriculum described in Section 3.3, while the option–critic model performed best with a length–weighted curriculum that has access to all tasks from the beginning of training.
Learning curves for baselines and the modular model are shown in Figure 4. It can be seen that in all environments, our approach substantially outperforms the baselines: it induces policies with substantially higher average reward and converges more quickly than the policy gradient baselines. It can further be seen in Figure 4c that after policies have been learned on simple tasks, the model is able to rapidly adapt to more complex ones, even when the longer tasks involve high-level actions not required for any of the short tasks (Appendix A).

Having demonstrated the overall effectiveness of our approach, our remaining experiments explore (1) the importance of various components of the training procedure, and (2) the learned models’ ability to generalize or adapt to held-out tasks. For compactness, we restrict our consideration to the crafting domain, which features a larger and more diverse range of tasks and high-level actions.

# 4.4. Ablations
In addition to the overall modular parameter-tying structure induced by our sketches, the key components of our training procedure are the decoupled critic and the curriculum. Our next experiments investigate the extent to which these are necessary for good performance.

To evaluate the critic, we consider three ablations: (1) removing the dependence of the model on the environment state, in which case the baseline is a single scalar per task; (2) removing the dependence of the model on the task, in which case the baseline is a conventional generalized advantage estimator; and (3) removing both, in which case the baseline is a single scalar, as in a vanilla policy gradient approach.
Figure 5: Training details in the crafting domain. (a) Critics: lines labeled “task” include a baseline that varies with task identity, while lines labeled “state” include a baseline that varies with state identity. Estimating a baseline that depends on both the representation of the current state and the identity of the current task is better than either alone or a constant baseline. (b) Curricula: lines labeled “len” use a curriculum with iteratively increasing sketch lengths, while lines labeled “wgt” sample tasks in inverse proportion to their current reward. Adjusting the sampling distribution based on both task length and performance return improves convergence. (c) Individual task performance. Colors correspond to task length. Sharp steps in the learning curve correspond to increases of ℓmax in the curriculum.

Results are shown in Figure 5a. Introducing both state and task dependence into the baseline leads to faster convergence of the model: the approach with a constant baseline achieves less than half the overall performance of the full critic after 3 million episodes. Introducing task and state dependence independently each improves performance; combining them gives the best result.
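To make the four baseline variants concrete, the sketch below (our own illustration, not the paper's code) shows how they differ only in their functional dependence on state features and task identity; the linear parameterization is an assumption.

```python
import numpy as np

class Baselines:
    def __init__(self, n_tasks, n_features):
        self.c0 = 0.0                                   # constant: no state, no task
        self.c_task = np.zeros(n_tasks)                 # one scalar per task
        self.w_state = np.zeros(n_features)             # shared state-dependent estimator
        self.w_full = np.zeros((n_tasks, n_features))   # full critic: state and task

    def value(self, kind, task, feats):
        if kind == "constant":
            return self.c0
        if kind == "task":
            return float(self.c_task[task])
        if kind == "state":
            return float(self.w_state @ feats)
        return float(self.w_full[task] @ feats)         # "state+task": the full critic
```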
We also investigate two aspects of our curriculum learning scheme: starting with short examples and moving to long ones, and sampling tasks in inverse proportion to their accumulated reward. Experiments are shown in Figure 5b. Both components help; prioritization by both length and weight gives the best results.

# 4.5. Zero-shot and Adaptation Learning

In our final experiments, we consider the model’s ability to generalize beyond the standard training condition. We first consider two tests of generalization: a zero-shot setting, in which the model is provided a sketch for the new task and must immediately achieve good performance, and an adaptation setting, in which no sketch is provided and the model must learn the form of a suitable sketch via interaction in the new task.
We hold out two length-four tasks from the full inventory used in Section 4.3, and train on the remaining tasks. For zero-shot experiments, we simply form the concatenated policy described by the sketches of the held-out tasks, and repeatedly execute this policy (without learning) in order to obtain an estimate of its effectiveness. For adaptation experiments, we consider ordinary RL over high-level actions B, implementing the high-level learner with the same agent architecture as described in Section 3.1. Note that the Independent and Option–Critic models cannot be applied to the zero-shot evaluation, while the Joint model cannot be applied to the adaptation baseline (because it depends on pre-specified sketch features).

Model            Multitask   0-shot   Adaptation
Joint               .49        .01        –
Independent         .44         –        .01
Option–Critic       .47         –        .42
Modular (ours)      .89        .77       .76

Table 1: Accuracy and generalization of learned models in the crafting domain. The table shows the task completion rate for each approach after convergence under various training conditions. Multitask is the multitask training condition described in Section 4.3, while 0-Shot and Adaptation are the generalization experiments described in Section 4.5. Our modular approach consistently achieves the best performance.

Results are shown in Table 1. The held-out tasks are sufficiently challenging that the baselines are unable to obtain more than negligible reward: in particular, the joint model overfits to the training tasks and cannot generalize to new sketches,
while the independent model cannot discover enough of a reward signal to learn in the adaptation setting. The modular model does comparatively well: individual subpolicies succeed in novel zero-shot configurations (suggesting that they have in fact discovered the behavior suggested by the semantics of the sketch) and provide a suitable basis for adaptive discovery of new high-level policies.
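A sketch of the zero-shot evaluation procedure described above: each subtask symbol in a held-out sketch is mapped to its frozen trained subpolicy, and the concatenated policy is simply executed, with no learning. The environment interface and the use of a distinguished STOP action to signal subpolicy termination are assumptions made for this illustration.

```python
def execute_sketch(env, sketch, subpolicies, stop_action="STOP", max_steps=500):
    """Run the concatenated policy for one held-out task and return its reward."""
    state = env.reset()
    total_reward, idx, steps = 0.0, 0, 0
    while idx < len(sketch) and steps < max_steps:
        action = subpolicies[sketch[idx]].act(state)   # frozen parameters, no updates
        if action == stop_action:
            idx += 1                                    # advance to the next subtask symbol
            continue
        state, reward, done = env.step(action)          # assumed 3-tuple interface
        total_reward += reward
        steps += 1
        if done:
            break
    return total_reward
```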
# 5. Conclusions
We have described an approach for multitask learning of deep multitask policies guided by symbolic policy sketches. By associating each symbol appearing in a sketch with a modular neural subpolicy, we have shown that it is possible to build agents that share behavior across tasks in order to achieve success in tasks with sparse and delayed rewards. This process induces an inventory of reusable and interpretable subpolicies which can be employed for zero-shot generalization when further sketches are available, and hierarchical reinforcement learning when they are not. Our work suggests that these sketches, which are easy to produce and require no grounding in the environment, provide an effective scaffold for learning hierarchical policies from minimal supervision.

# Acknowledgments

JA is supported by a Facebook Graduate Fellowship and a Berkeley AI / Huawei Fellowship.

# References

Andre, David and Russell, Stuart. Programmable reinforcement learning agents. In Advances in Neural Information Processing Systems, 2001.
Andre, David and Russell, Stuart. State abstraction for programmable reinforcement learning agents. In Proceedings of the Meeting of the Association for the Advancement of Artificial Intelligence, 2002.

Andreas, Jacob, Rohrbach, Marcus, Darrell, Trevor, and Klein, Dan. Learning to compose neural networks for question answering. In Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics, 2016.

Artzi, Yoav and Zettlemoyer, Luke. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1(1):49–62, 2013.

Devin, Coline, Gupta, Abhishek, Darrell, Trevor, Abbeel, Pieter, and Levine, Sergey. Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv:1609.07088, 2016.

Dietterich, Thomas G. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227–303, 2000.

Greensmith, Evan, Bartlett, Peter L, and Baxter, Jonathan. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471–1530, 2004.
Hauser, Kris, Bretl, Timothy, Harada, Kensuke, and Latombe, Jean-Claude. Using motion primitives in probabilistic sample-based planning for humanoid robots. In Algorithmic Foundation of Robotics, pp. 507–522. Springer, 2008.

Iyyer, Mohit, Boyd-Graber, Jordan, Claudino, Leonardo, Socher, Richard, and Daumé III, Hal. A neural network for factoid question answering over paragraphs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2014.

Bacon, Pierre-Luc and Precup, Doina. The option-critic architecture. In NIPS Deep Reinforcement Learning Workshop, 2015.

Kearns, Michael and Singh, Satinder. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002.

Bakker, Bram and Schmidhuber, Jürgen. Hierarchical reinforcement learning based on subgoal discovery and subpolicy specialization. In Proc. of the 8th Conf. on Intelligent Autonomous Systems, pp. 438–445, 2004.
Bengio, Yoshua, Louradour, Jérôme, Collobert, Ronan, and Weston, Jason. Curriculum learning. In International Conference on Machine Learning, pp. 41–48. ACM, 2009.

Branavan, S.R.K., Chen, Harr, Zettlemoyer, Luke S., and Barzilay, Regina. Reinforcement learning for mapping instructions to actions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 82–90. Association for Computational Linguistics, 2009.

Chen, David L. and Mooney, Raymond J. Learning to interpret natural language navigation instructions from observations. In Proceedings of the Meeting of the Association for the Advancement of Artificial Intelligence, volume 2, pp. 1–2, 2011.

Konidaris, George and Barto, Andrew G. Building portable options: Skill transfer in reinforcement learning. In IJCAI, volume 7, pp. 895–900, 2007.

Konidaris, George, Kuindersma, Scott, Grupen, Roderic, and Barto, Andrew. Robot learning from demonstration by constructing skill trees. The International Journal of Robotics Research, pp. 0278364911428653, 2011.
Kulkarni, Tejas D, Narasimhan, Karthik R, Saeedi, Ardavan, and Tenenbaum, Joshua B. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. arXiv preprint arXiv:1604.06057, 2016.

Marthi, Bhaskara, Lantham, David, Guestrin, Carlos, and Russell, Stuart. Concurrent hierarchical reinforcement learning. In Proceedings of the Meeting of the Association for the Advancement of Artificial Intelligence, 2004.

Menache, Ishai, Mannor, Shie, and Shimkin, Nahum. Q-cut: dynamic discovery of sub-goals in reinforcement learning. In European Conference on Machine Learning, pp. 295–306. Springer, 2002.

Daniel, Christian, Neumann, Gerhard, and Peters, Jan. Hierarchical relative entropy policy search. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pp. 273–281, 2012.

Neelakantan, Arvind, Le, Quoc V, and Sutskever, Ilya. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.
1611.01796#45
Modular Multitask Reinforcement Learning with Policy Sketches
We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them---specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor--critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level subgoals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.
http://arxiv.org/pdf/1611.01796
Jacob Andreas, Dan Klein, Sergey Levine
cs.LG, cs.NE
To appear at ICML 2017
null
cs.LG
20161106
20170617
[ { "id": "1606.04695" }, { "id": "1609.07088" }, { "id": "1506.02438" }, { "id": "1511.04834" }, { "id": "1604.06057" } ]
1611.01796
46
Niekum, Scott, Osentoski, Sarah, Konidaris, George, Chitta, Sachin, Marthi, Bhaskara, and Barto, Andrew G. Learning grounded finite-state representations from unstructured demonstrations. The International Journal of Robotics Research, 34(2):131–157, 2015. Vogel, Adam and Jurafsky, Dan. Learning to follow navigational directions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 806–814. Association for Computational Linguistics, 2010. Parr, Ron and Russell, Stuart. Reinforcement learning with hierarchies of machines. In Advances in Neural Information Processing Systems, 1998. Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992. Precup, Doina. Temporal abstraction in reinforcement learning. PhD thesis, 2000. Reed, Scott and de Freitas, Nando. Neural programmer-interpreters. Proceedings of the International Conference on Learning Representations, 2016.
1611.01796#46
Modular Multitask Reinforcement Learning with Policy Sketches
We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them---specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor--critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level subgoals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.
http://arxiv.org/pdf/1611.01796
Jacob Andreas, Dan Klein, Sergey Levine
cs.LG, cs.NE
To appear at ICML 2017
null
cs.LG
20161106
20170617
[ { "id": "1606.04695" }, { "id": "1609.07088" }, { "id": "1506.02438" }, { "id": "1511.04834" }, { "id": "1604.06057" } ]
1611.01796
47
Reed, Scott and de Freitas, Nando. Neural programmer-interpreters. Proceedings of the International Conference on Learning Representations, 2016. Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015a. Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. Trust region policy optimization. In International Conference on Machine Learning, 2015b. Socher, Richard, Huval, Brody, Manning, Christopher, and Ng, Andrew. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1201–1211, Jeju, Korea, 2012. Stolle, Martin and Precup, Doina. Learning options in reinforcement learning. In International Symposium on Abstraction, Reformulation, and Approximation, pp. 212–223. Springer, 2002.
1611.01796#47
Modular Multitask Reinforcement Learning with Policy Sketches
We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them---specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor--critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level subgoals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.
http://arxiv.org/pdf/1611.01796
Jacob Andreas, Dan Klein, Sergey Levine
cs.LG, cs.NE
To appear at ICML 2017
null
cs.LG
20161106
20170617
[ { "id": "1606.04695" }, { "id": "1609.07088" }, { "id": "1506.02438" }, { "id": "1511.04834" }, { "id": "1604.06057" } ]
1611.01796
48
Sutton, Richard S, Precup, Doina, and Singh, Satinder. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181–211, 1999. Tellex, Stefanie, Kollar, Thomas, Dickerson, Steven, Walter, Matthew R., Banerjee, Ashis Gopal, Teller, Seth, and Roy, Nicholas. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the National Conference on Artificial Intelligence, 2011. Tieleman, Tijmen. RMSProp (unpublished), 2012. Vezhnevets, Alexander, Mnih, Volodymyr, Agapiou, John, Osindero, Simon, Graves, Alex, Vinyals, Oriol, and Kavukcuoglu, Koray. Strategic attentive writer for learning macro-actions. arXiv preprint arXiv:1606.04695, 2016.
1611.01796#48
Modular Multitask Reinforcement Learning with Policy Sketches
We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them---specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor--critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level subgoals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.
http://arxiv.org/pdf/1611.01796
Jacob Andreas, Dan Klein, Sergey Levine
cs.LG, cs.NE
To appear at ICML 2017
null
cs.LG
20161106
20170617
[ { "id": "1606.04695" }, { "id": "1609.07088" }, { "id": "1506.02438" }, { "id": "1511.04834" }, { "id": "1604.06057" } ]
1611.01796
49
A. Tasks and Sketches
The complete list of tasks, sketches, and symbols is given below. Tasks marked with an asterisk∗ are held out for the generalization experiments described in Section 4.5, but included in the multitask training experiments in Sections 4.3 and 4.4.
Crafting environment (goal: sketch)
make plank: get wood, use toolshed
make stick: get wood, use workbench
make cloth: get grass, use factory
make rope: get grass, use toolshed
make bridge: get iron, get wood, use factory
make bed∗: get wood, use toolshed, get grass, use workbench
make axe∗: get wood, use workbench, get iron, use toolshed
make shears: get wood, use workbench, get iron, use workbench
get gold: get iron, get wood, use factory, use bridge
get gem: get wood, use workbench, get iron, use toolshed, use axe
Maze environment: rooms 1–10, each annotated with a sketch of two or three of the subpolicies left, right, up, and down.
1611.01796#49
Modular Multitask Reinforcement Learning with Policy Sketches
We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them---specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor--critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level subgoals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.
http://arxiv.org/pdf/1611.01796
Jacob Andreas, Dan Klein, Sergey Levine
cs.LG, cs.NE
To appear at ICML 2017
null
cs.LG
20161106
20170617
[ { "id": "1606.04695" }, { "id": "1609.07088" }, { "id": "1506.02438" }, { "id": "1511.04834" }, { "id": "1604.06057" } ]
1611.01796
50
Maze environment: rooms 1–10, each annotated with a sketch of two or three of the subpolicies left, right, up, and down. Cliff environment: paths 0–23, each annotated with a sketch of two or three of the subpolicies north, east, south, and west.
1611.01796#50
Modular Multitask Reinforcement Learning with Policy Sketches
We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them---specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). To learn from sketches, we present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Optimization is accomplished via a decoupled actor--critic training objective that facilitates learning common behaviors from multiple dissimilar reward functions. We evaluate the effectiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level subgoals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.
http://arxiv.org/pdf/1611.01796
Jacob Andreas, Dan Klein, Sergey Levine
cs.LG, cs.NE
To appear at ICML 2017
null
cs.LG
20161106
20170617
[ { "id": "1606.04695" }, { "id": "1609.07088" }, { "id": "1506.02438" }, { "id": "1511.04834" }, { "id": "1604.06057" } ]
1611.01673
0
Published as a conference paper at ICLR 2017 # GENERATIVE MULTI-ADVERSARIAL NETWORKS # Ishan Durugkar*, Ian Gemp*, Sridhar Mahadevan. College of Information and Computer Sciences, University of Massachusetts, Amherst, Amherst, MA 01060, USA {idurugkar, imgemp, mahadeva}@cs.umass.edu # ABSTRACT Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
1611.01673#0
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
http://arxiv.org/pdf/1611.01673
Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
cs.LG, cs.MA, cs.NE
Accepted as a conference paper (poster) at ICLR 2017
null
cs.LG
20161105
20170302
[ { "id": "1511.06390" }, { "id": "1511.05897" }, { "id": "1610.02920" } ]
1611.01576
1
# ABSTRACT Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep’s computation on the previous timestep’s output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks. # INTRODUCTION
1611.01576#1
Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
http://arxiv.org/pdf/1611.01576
James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher
cs.NE, cs.AI, cs.CL, cs.LG
Submitted to conference track at ICLR 2017
null
cs.NE
20161105
20161121
[ { "id": "1605.07725" }, { "id": "1508.06615" }, { "id": "1606.01305" }, { "id": "1610.10099" }, { "id": "1609.08144" }, { "id": "1602.00367" }, { "id": "1511.08630" }, { "id": "1609.07843" }, { "id": "1608.06993" }, { "id": "1610.03017" }, { "id": "1601.06759" }, { "id": "1606.02960" } ]
1611.01578
1
# ABSTRACT Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214. # INTRODUCTION
1611.01578#1
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
http://arxiv.org/pdf/1611.01578
Barret Zoph, Quoc V. Le
cs.LG, cs.AI, cs.NE
null
null
cs.LG
20161105
20170215
[ { "id": "1611.01462" }, { "id": "1607.03474" }, { "id": "1603.05027" }, { "id": "1609.09106" }, { "id": "1511.06732" }, { "id": "1508.06615" }, { "id": "1606.04474" }, { "id": "1608.05859" }, { "id": "1609.08144" }, { "id": "1606.01885" }, { "id": "1505.00387" }, { "id": "1609.07843" }, { "id": "1512.05287" }, { "id": "1603.09382" }, { "id": "1608.06993" }, { "id": "1605.07648" } ]
1611.01600
1
# ABSTRACT Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximations and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks. # INTRODUCTION
1611.01600#1
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximation and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
http://arxiv.org/pdf/1611.01600
Lu Hou, Quanming Yao, James T. Kwok
cs.NE, cs.LG
null
null
cs.NE
20161105
20180510
[ { "id": "1605.04711" }, { "id": "1606.06160" }, { "id": "1502.04390" } ]
1611.01603
1
# ABSTRACT Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test. # INTRODUCTION
1611.01603#1
Bidirectional Attention Flow for Machine Comprehension
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.
http://arxiv.org/pdf/1611.01603
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi
cs.CL
Published as a conference paper at ICLR 2017
null
cs.CL
20161105
20180621
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1611.01604" }, { "id": "1609.05284" }, { "id": "1610.09996" }, { "id": "1606.01549" }, { "id": "1511.02274" }, { "id": "1505.00387" }, { "id": "1611.01436" }, { "id": "1611.01724" }, { "id": "1607.04423" } ]
1611.01626
1
# ABSTRACT Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQL’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning. # INTRODUCTION
1611.01626#1
Combining policy gradient and Q-learning
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
http://arxiv.org/pdf/1611.01626
Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih
cs.LG, cs.AI, math.OC, stat.ML
null
null
cs.LG
20161105
20170407
[ { "id": "1602.01783" }, { "id": "1509.02971" }, { "id": "1609.00150" }, { "id": "1512.04105" }, { "id": "1511.05952" }, { "id": "1504.00702" } ]
1611.01673
1
# 1 INTRODUCTION Generative adversarial networks (Goodfellow et al. (2014)) (GANs) are a framework for producing a generative model by way of a two-player minimax game. One player, the generator, attempts to generate realistic data samples by transforming noisy samples, z, drawn from a simple distribution (e.g., z ~ N(0, 1)) using a transformation function G_θ(z) with learned weights, θ. The generator receives feedback as to how realistic its synthetic sample is from another player, the discriminator, which attempts to discern between synthetic data samples produced by the generator and samples drawn from an actual dataset using a function D_ω(x) with learned weights, ω.
1611.01673#1
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
http://arxiv.org/pdf/1611.01673
Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
cs.LG, cs.MA, cs.NE
Accepted as a conference paper (poster) at ICLR 2017
null
cs.LG
20161105
20170302
[ { "id": "1511.06390" }, { "id": "1511.05897" }, { "id": "1610.02920" } ]
1611.01576
2
# INTRODUCTION Recurrent neural networks (RNNs), including gated variants such as the long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) have become the standard model architecture for deep learning approaches to sequence modeling tasks. RNNs repeatedly apply a function with trainable parameters to a hidden state. Recurrent layers can also be stacked, increasing network depth, representational power and often accuracy. RNN applications in the natural language domain range from sentence classification (Wang et al., 2015) to word- and character-level language modeling (Zaremba et al., 2014). RNNs are also commonly the basic building block for more complex models for tasks such as machine translation (Bahdanau et al., 2015; Luong et al., 2015; Bradbury & Socher, 2016) or question answering (Kumar et al., 2016; Xiong et al., 2016). Unfortunately standard RNNs, including LSTMs, are limited in their capability to handle tasks involving very long sequences, such as document classification or character-level machine translation, as the computation of features or states for different parts of the document cannot occur in parallel.
1611.01576#2
Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
http://arxiv.org/pdf/1611.01576
James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher
cs.NE, cs.AI, cs.CL, cs.LG
Submitted to conference track at ICLR 2017
null
cs.NE
20161105
20161121
[ { "id": "1605.07725" }, { "id": "1508.06615" }, { "id": "1606.01305" }, { "id": "1610.10099" }, { "id": "1609.08144" }, { "id": "1602.00367" }, { "id": "1511.08630" }, { "id": "1609.07843" }, { "id": "1608.06993" }, { "id": "1610.03017" }, { "id": "1601.06759" }, { "id": "1606.02960" } ]
1611.01578
2
# INTRODUCTION The last few years have seen much success of deep neural networks in many challenging applications, such as speech recognition (Hinton et al., 2012), image recognition (LeCun et al., 1998; Krizhevsky et al., 2012) and machine translation (Sutskever et al., 2014; Bahdanau et al., 2015; Wu et al., 2016). Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe, 1999), and HOG (Dalal & Triggs, 2005), to AlexNet (Krizhevsky et al., 2012), VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), and ResNet (He et al., 2016a). Although it has become easier, designing architectures still requires a lot of expert knowledge and takes ample time. Figure 1: An overview of Neural Architecture Search (the controller samples an architecture A with probability p, a child network with architecture A is trained to obtain accuracy R, and the gradient of p is scaled by R to update the controller). This paper presents Neural Architecture Search, a gradient-based method for finding good architectures (see Figure 1). Our work is based on the observation that the structure and connectivity of a
1611.01578#2
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
http://arxiv.org/pdf/1611.01578
Barret Zoph, Quoc V. Le
cs.LG, cs.AI, cs.NE
null
null
cs.LG
20161105
20170215
[ { "id": "1611.01462" }, { "id": "1607.03474" }, { "id": "1603.05027" }, { "id": "1609.09106" }, { "id": "1511.06732" }, { "id": "1508.06615" }, { "id": "1606.04474" }, { "id": "1608.05859" }, { "id": "1609.08144" }, { "id": "1606.01885" }, { "id": "1505.00387" }, { "id": "1609.07843" }, { "id": "1512.05287" }, { "id": "1603.09382" }, { "id": "1608.06993" }, { "id": "1605.07648" } ]
1611.01600
2
# INTRODUCTION Recently, deep neural networks have achieved state-of-the-art performance in various tasks such as speech recognition, visual object recognition, and image classification (LeCun et al., 2015). Though powerful, the large number of network weights leads to space and time inefficiencies in both training and storage. For instance, the popular AlexNet, VGG-16 and Resnet-18 all require hundreds of megabytes to store, and billions of high-precision operations on classification. This limits their use in embedded systems, smart phones and other portable devices that are now everywhere. To alleviate this problem, a number of approaches have been recently proposed. One attempt first trains a neural network and then compresses it (Han et al., 2016; Kim et al., 2016). Instead of this two-step approach, it is more desirable to train and compress the network simultaneously. Example approaches include tensorizing (Novikov et al., 2015), parameter quantization (Gong et al., 2014), and binarization (Courbariaux et al., 2015; Hubara et al., 2016; Rastegari et al., 2016). In particular, binarization only requires one bit for each weight value. This can significantly reduce storage, and also eliminates most multiplications during the forward pass.
1611.01600#2
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximation and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
http://arxiv.org/pdf/1611.01600
Lu Hou, Quanming Yao, James T. Kwok
cs.NE, cs.LG
null
null
cs.NE
20161105
20180510
[ { "id": "1605.04711" }, { "id": "1606.06160" }, { "id": "1502.04390" } ]
1611.01603
2
# INTRODUCTION The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety of tasks in the text and image domains. One of the key factors to the advancement has been the use of neural attention mechanism, which enables the system to focus on a targeted area within a context paragraph (for MC) or within an image (for Visual QA), that is most relevant to answer the question (Weston et al., 2015; Antol et al., 2015; Xiong et al., 2016a). Attention mechanisms in previous works typically have one or more of the following characteristics. First, the computed attention weights are often used to extract the most relevant information from the context for answering the question by summarizing the context into a fixed-size vector. Second, in the text domain, they are often temporally dynamic, whereby the attention weights at the current time step are a function of the attended vector at the previous time step. Third, they are usually uni-directional, wherein the query attends on the context paragraph or the image.
1611.01603#2
Bidirectional Attention Flow for Machine Comprehension
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.
http://arxiv.org/pdf/1611.01603
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi
cs.CL
Published as a conference paper at ICLR 2017
null
cs.CL
20161105
20180621
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1611.01604" }, { "id": "1609.05284" }, { "id": "1610.09996" }, { "id": "1606.01549" }, { "id": "1511.02274" }, { "id": "1505.00387" }, { "id": "1611.01436" }, { "id": "1611.01724" }, { "id": "1607.04423" } ]
1611.01626
2
# INTRODUCTION In reinforcement learning an agent explores an environment and through the use of a reward signal learns to optimize its behavior to maximize the expected long-term return. Reinforcement learning has seen success in several areas including robotics (Lin, 1993; Levine et al., 2015), computer games (Mnih et al., 2013; 2015), online advertising (Pednault et al., 2002), board games (Tesauro, 1995; Silver et al., 2016), and many others. For an introduction to reinforcement learning we refer to the classic text by Sutton & Barto (1998). In this paper we consider model-free reinforcement learning, where the state-transition function is not known or learned. There are many different algorithms for model-free reinforcement learning, but most fall into one of two families: action-value fitting and policy gradient techniques.
1611.01626#2
Combining policy gradient and Q-learning
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
http://arxiv.org/pdf/1611.01626
Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih
cs.LG, cs.AI, math.OC, stat.ML
null
null
cs.LG
20161105
20170407
[ { "id": "1602.01783" }, { "id": "1509.02971" }, { "id": "1609.00150" }, { "id": "1512.04105" }, { "id": "1511.05952" }, { "id": "1504.00702" } ]
1611.01673
2
The GAN framework is one of the more recent successes in a line of research on adversarial training in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where games between learners are carefully crafted so that Nash equilibria coincide with some set of desired optimality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCun et al. (1998)), CIFAR (Krizhevsky (2009))), however, GANs have proven useful in a variety of application domains including learning censored representations (Edwards & Storkey (2015)), imitating expert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extending GANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014); Springenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning (Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015); Radford et al. (2015)) have shown promise as well.
1611.01673#2
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
http://arxiv.org/pdf/1611.01673
Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
cs.LG, cs.MA, cs.NE
Accepted as a conference paper (poster) at ICLR 2017
null
cs.LG
20161105
20170302
[ { "id": "1511.06390" }, { "id": "1511.05897" }, { "id": "1610.02920" } ]
1611.01576
3
Convolutional neural networks (CNNs) (Krizhevsky et al., 2012), though more popular on tasks involving image data, have also been applied to sequence encoding tasks (Zhang et al., 2015). Such models apply time-invariant filter functions in parallel to windows along the input sequence. CNNs possess several advantages over recurrent models, including increased parallelism and better scaling to long sequences such as those often seen with character-level language data. Convolutional models for sequence processing have been more successful when combined with RNN layers in a hybrid architecture (Lee et al., 2016), because traditional max- and average-pooling approaches to combining convolutional features across timesteps assume time invariance and hence cannot make full use of large-scale sequence order information.
1611.01576#3
Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
http://arxiv.org/pdf/1611.01576
James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher
cs.NE, cs.AI, cs.CL, cs.LG
Submitted to conference track at ICLR 2017
null
cs.NE
20161105
20161121
[ { "id": "1605.07725" }, { "id": "1508.06615" }, { "id": "1606.01305" }, { "id": "1610.10099" }, { "id": "1609.08144" }, { "id": "1602.00367" }, { "id": "1511.08630" }, { "id": "1609.07843" }, { "id": "1608.06993" }, { "id": "1610.03017" }, { "id": "1601.06759" }, { "id": "1606.02960" } ]
1611.01578
3
∗Work done as a member of the Google Brain Residency program (g.co/brainresidency). neural network can be typically specified by a variable-length string. It is therefore possible to use a recurrent network – the controller – to generate such a string. Training the network specified by the string – the “child network” – on the real data will result in an accuracy on a validation set. Using this accuracy as the reward signal, we can compute the policy gradient to update the controller. As a result, in the next iteration, the controller will give higher probabilities to architectures that receive high accuracies. In other words, the controller will learn to improve its search over time.
1611.01578#3
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
http://arxiv.org/pdf/1611.01578
Barret Zoph, Quoc V. Le
cs.LG, cs.AI, cs.NE
null
null
cs.LG
20161105
20170215
[ { "id": "1611.01462" }, { "id": "1607.03474" }, { "id": "1603.05027" }, { "id": "1609.09106" }, { "id": "1511.06732" }, { "id": "1508.06615" }, { "id": "1606.04474" }, { "id": "1608.05859" }, { "id": "1609.08144" }, { "id": "1606.01885" }, { "id": "1505.00387" }, { "id": "1609.07843" }, { "id": "1512.05287" }, { "id": "1603.09382" }, { "id": "1608.06993" }, { "id": "1605.07648" } ]
1611.01600
3
Courbariaux et al. (2015) pioneered neural network binarization with the BinaryConnect algorithm, which achieves state-of-the-art results on many classification tasks. Besides binarizing the weights, Hubara et al. (2016) further binarized the activations. Rastegari et al. (2016) also learned to scale the binarized weights, and obtained better results. Besides, they proposed the XNOR-network with both weights and activations binarized as in (Hubara et al., 2016). Instead of binarization, ternary-connect quantizes each weight to {−1, 0, 1} (Lin et al., 2016). Similarly, the ternary weight network (Li & Liu, 2016) and DoReFa-net (Zhou et al., 2016) quantize weights to three levels or more. However, though using more bits allows more accurate weight approximations, specialized hardware is needed for the underlying non-binary operations.
1611.01600#3
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximation and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
http://arxiv.org/pdf/1611.01600
Lu Hou, Quanming Yao, James T. Kwok
cs.NE, cs.LG
null
null
cs.NE
20161105
20180510
[ { "id": "1605.04711" }, { "id": "1606.06160" }, { "id": "1502.04390" } ]
1611.01603
3
In this paper, we introduce the Bi-Directional Attention Flow (BIDAF) network, a hierarchical multi-stage architecture for modeling the representations of the context paragraph at different levels of granularity (Figure 1). BIDAF includes character-level, word-level, and contextual embeddings, and uses bi-directional attention flow to obtain a query-aware context representation. Our attention mechanism offers the following improvements to the previously popular attention paradigms. First, our attention layer is not used to summarize the context paragraph into a fixed-size vector. Instead, the attention is computed for every time step, and the attended vector at each time step, along with the representations from previous layers, is allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. Second, we use a memory-less attention mechanism. That is, while we iteratively compute attention through time as in Bahdanau et al. (2015), the attention at each time step is a function of only the query and the context paragraph at the current time step and does not directly depend on the attention at the previous time step. We hypothesize that this
1611.01603#3
Bidirectional Attention Flow for Machine Comprehension
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.
http://arxiv.org/pdf/1611.01603
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi
cs.CL
Published as a conference paper at ICLR 2017
null
cs.CL
20161105
20180621
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1611.01604" }, { "id": "1609.05284" }, { "id": "1610.09996" }, { "id": "1606.01549" }, { "id": "1511.02274" }, { "id": "1505.00387" }, { "id": "1611.01436" }, { "id": "1611.01724" }, { "id": "1607.04423" } ]
1611.01626
3
Action-value techniques involve fitting a function, called the Q-values, that captures the expected return for taking a particular action at a particular state, and then following a particular policy thereafter. Two alternatives we discuss in this paper are SARSA (Rummery & Niranjan, 1994) and Q-learning (Watkins, 1989), although there are many others. SARSA is an on-policy algorithm whereby the action-value function is fit to the current policy, which is then refined by being mostly greedy with respect to those action-values. On the other hand, Q-learning attempts to find the Q-values associated with the optimal policy directly and does not fit to the policy that was used to generate the data. Q-learning is an off-policy algorithm that can use data generated by another agent or from a replay buffer of old experience. Under certain conditions both SARSA and Q-learning can be shown to converge to the optimal Q-values, from which we can derive the optimal policy (Sutton, 1988; Bertsekas & Tsitsiklis, 1996).
1611.01626#3
Combining policy gradient and Q-learning
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
http://arxiv.org/pdf/1611.01626
Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih
cs.LG, cs.AI, math.OC, stat.ML
null
null
cs.LG
20161105
20170407
[ { "id": "1602.01783" }, { "id": "1509.02971" }, { "id": "1609.00150" }, { "id": "1512.04105" }, { "id": "1511.05952" }, { "id": "1504.00702" } ]
1611.01673
3
Despite these successes, GANs are reputably difficult to train. While research is still underway to improve training techniques and heuristics (Salimans et al. (2016)), most approaches have focused on understanding and generalizing GANs theoretically with the aim of exploring more tractable formulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016)). In this paper, we theoretically and empirically justify generalizing the GAN framework to multiple discriminators. We review GANs and summarize our extension in Section 2. In Sections 3 and 4, we present our N-discriminator extension to the GAN framework (Generative Multi-Adversarial Networks) with several variants which range the role of the discriminator from formidable adversary to forgiving teacher. Section 4.2 explains how this extension makes training with the untampered minimax objective tractable. In Section 5, we define an intuitive metric (GMAM) to quantify GMAN performance and evaluate our framework on a variety of image generation tasks. Section 6 concludes with a summary of our contributions and directions for future research.
1611.01673#3
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
http://arxiv.org/pdf/1611.01673
Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
cs.LG, cs.MA, cs.NE
Accepted as a conference paper (poster) at ICLR 2017
null
cs.LG
20161105
20170302
[ { "id": "1511.06390" }, { "id": "1511.05897" }, { "id": "1610.02920" } ]
1611.01576
4
We present quasi-recurrent neural networks for neural sequence modeling. QRNNs address both drawbacks of standard models: like CNNs, QRNNs allow for parallel computation across both timestep and minibatch dimensions, enabling high throughput and good scaling to long sequences. Like RNNs, QRNNs allow the output to depend on the overall order of elements in the sequence. We describe QRNN variants tailored to several natural language tasks, including document-level sentiment classification, language modeling, and character-level machine translation. These models outperform strong LSTM baselines on all three tasks while dramatically reducing computation time. [Figure 1: block diagrams comparing the computation structure of the QRNN with typical LSTM and CNN architectures.]
1611.01576#4
Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
http://arxiv.org/pdf/1611.01576
James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher
cs.NE, cs.AI, cs.CL, cs.LG
Submitted to conference track at ICLR 2017
null
cs.NE
20161105
20161121
[ { "id": "1605.07725" }, { "id": "1508.06615" }, { "id": "1606.01305" }, { "id": "1610.10099" }, { "id": "1609.08144" }, { "id": "1602.00367" }, { "id": "1511.08630" }, { "id": "1609.07843" }, { "id": "1608.06993" }, { "id": "1610.03017" }, { "id": "1601.06759" }, { "id": "1606.02960" } ]
1611.01578
4
Our experiments show that Neural Architecture Search can design good models from scratch, an achievement considered not possible with other methods. On image recognition with CIFAR-10, Neural Architecture Search can find a novel ConvNet model that is better than most human-invented architectures. Our CIFAR-10 model achieves a 3.65 test set error, while being 1.05x faster than the current best model. On language modeling with Penn Treebank, Neural Architecture Search can design a novel recurrent cell that is also better than previous RNN and LSTM architectures. The cell that our model found achieves a test set perplexity of 62.4 on the Penn Treebank dataset, which is 3.6 perplexity better than the previous state-of-the-art. # 2 RELATED WORK
1611.01578#4
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
http://arxiv.org/pdf/1611.01578
Barret Zoph, Quoc V. Le
cs.LG, cs.AI, cs.NE
null
null
cs.LG
20161105
20170215
[ { "id": "1611.01462" }, { "id": "1607.03474" }, { "id": "1603.05027" }, { "id": "1609.09106" }, { "id": "1511.06732" }, { "id": "1508.06615" }, { "id": "1606.04474" }, { "id": "1608.05859" }, { "id": "1609.08144" }, { "id": "1606.01885" }, { "id": "1505.00387" }, { "id": "1609.07843" }, { "id": "1512.05287" }, { "id": "1603.09382" }, { "id": "1608.06993" }, { "id": "1605.07648" } ]
1611.01600
4
Besides the huge amount of computation and storage involved, deep networks are difficult to train because of the highly nonconvex objective and inhomogeneous curvature. To alleviate this problem, Hessian-free methods (Martens & Sutskever, 2012) use the second-order information by conjugate gradient. A related method is natural gradient descent (Pascanu & Bengio, 2014), which utilizes geometry of the underlying parameter manifold. Another approach uses an element-wise adaptive learning rate, as in Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), RMSprop (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2015). This can also be considered as preconditioning that rescales the gradient so that all dimensions have similar curvatures.
1611.01600#4
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximation and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
http://arxiv.org/pdf/1611.01600
Lu Hou, Quanming Yao, James T. Kwok
cs.NE, cs.LG
null
null
cs.NE
20161105
20180510
[ { "id": "1605.04711" }, { "id": "1606.06160" }, { "id": "1502.04390" } ]
1611.01603
4
function of only the query and the context paragraph at the current time step and does not directly depend on the attention at the previous time step. We hypothesize that this simplification leads to the division of labor between the attention layer and the modeling layer. It forces the attention layer to focus on learning the attention between the query and the context, and enables the modeling layer to focus on learning the interaction within the
1611.01603#4
Bidirectional Attention Flow for Machine Comprehension
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.
http://arxiv.org/pdf/1611.01603
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi
cs.CL
Published as a conference paper at ICLR 2017
null
cs.CL
20161105
20180621
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1611.01604" }, { "id": "1609.05284" }, { "id": "1610.09996" }, { "id": "1606.01549" }, { "id": "1511.02274" }, { "id": "1505.00387" }, { "id": "1611.01436" }, { "id": "1611.01724" }, { "id": "1607.04423" } ]
1611.01626
4
In policy gradient techniques the policy is represented explicitly and we improve the policy by updating the parameters in the direction of the gradient of the performance (Sutton et al., 1999; Silver et al., 2014; Kakade, 2001). Online policy gradient typically requires an estimate of the action-value function of the current policy. For this reason they are often referred to as actor-critic methods, where the actor refers to the policy and the critic to the estimate of the action-value function (Konda & Tsitsiklis, 2003). Vanilla actor-critic methods are on-policy only, although some attempts have been made to extend them to off-policy data (Degris et al., 2012; Levine & Koltun, 2013).
1611.01626#4
Combining policy gradient and Q-learning
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
http://arxiv.org/pdf/1611.01626
Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih
cs.LG, cs.AI, math.OC, stat.ML
null
null
cs.LG
20161105
20170407
[ { "id": "1602.01783" }, { "id": "1509.02971" }, { "id": "1609.00150" }, { "id": "1512.04105" }, { "id": "1511.05952" }, { "id": "1504.00702" } ]
1611.01673
4
performance and evaluate our framework on a variety of image generation tasks. Section 6 concludes with a summary of our contributions and directions for future research. Contributions—To summarize, our main contributions are: i) a multi-discriminator GAN framework, GMAN, that allows training with the original, untampered minimax objective; ii) a generative multi-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks; iii) a particular instance of GMAN, GMAN*, that allows the generator to automatically regulate training and reach higher performance (as measured by GMAM) in a fraction of the training time required for the standard GAN model. 2 GENERATIVE ADVERSARIAL NETWORKS TO GMAN The original formulation of a GAN is a minimax game between a generator, $G_\theta(z): z \to x$, and a discriminator, $D_\omega(x): x \to [0, 1]$, $\min_G \max_{D \in \mathcal{D}} V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log(D(x))] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$, (1)
1611.01673#4
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
http://arxiv.org/pdf/1611.01673
Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
cs.LG, cs.MA, cs.NE
Accepted as a conference paper (poster) at ICLR 2017
null
cs.LG
20161105
20170302
[ { "id": "1511.06390" }, { "id": "1511.05897" }, { "id": "1610.02920" } ]
1611.01576
5
Figure 1: Block diagrams showing the computation structure of the QRNN compared with typical LSTM and CNN architectures. Red signifies convolutions or matrix multiplications; a continuous block means that those computations can proceed in parallel. Blue signifies parameterless functions that operate in parallel along the channel/feature dimension. LSTMs can be factored into (red) linear blocks and (blue) elementwise blocks, but computation at each timestep still depends on the results from the previous timestep. # 2 MODEL
1611.01576#5
Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
http://arxiv.org/pdf/1611.01576
James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher
cs.NE, cs.AI, cs.CL, cs.LG
Submitted to conference track at ICLR 2017
null
cs.NE
20161105
20161121
[ { "id": "1605.07725" }, { "id": "1508.06615" }, { "id": "1606.01305" }, { "id": "1610.10099" }, { "id": "1609.08144" }, { "id": "1602.00367" }, { "id": "1511.08630" }, { "id": "1609.07843" }, { "id": "1608.06993" }, { "id": "1610.03017" }, { "id": "1601.06759" }, { "id": "1606.02960" } ]
1611.01578
5
# 2 RELATED WORK Hyperparameter optimization is an important research topic in machine learning, and is widely used in practice (Bergstra et al., 2011; Bergstra & Bengio, 2012; Snoek et al., 2012; 2015; Saxena & Verbeek, 2016). Despite their success, these methods are still limited in that they only search models from a fixed-length space. In other words, it is difficult to ask them to generate a variable-length configuration that specifies the structure and connectivity of a network. In practice, these methods often work better if they are supplied with a good initial model (Bergstra & Bengio, 2012; Snoek et al., 2012; 2015). There are Bayesian optimization methods that allow searching non-fixed-length architectures (Bergstra et al., 2013; Mendoza et al., 2016), but they are less general and less flexible than the method proposed in this paper.
1611.01578#5
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
http://arxiv.org/pdf/1611.01578
Barret Zoph, Quoc V. Le
cs.LG, cs.AI, cs.NE
null
null
cs.LG
20161105
20170215
[ { "id": "1611.01462" }, { "id": "1607.03474" }, { "id": "1603.05027" }, { "id": "1609.09106" }, { "id": "1511.06732" }, { "id": "1508.06615" }, { "id": "1606.04474" }, { "id": "1608.05859" }, { "id": "1609.08144" }, { "id": "1606.01885" }, { "id": "1505.00387" }, { "id": "1609.07843" }, { "id": "1512.05287" }, { "id": "1603.09382" }, { "id": "1608.06993" }, { "id": "1605.07648" } ]
1611.01600
5
In this paper, instead of directly approximating the weights, we propose to consider the effect of binarization on the loss during binarization. We formulate this as an optimization problem using the proximal Newton algorithm (Lee et al., 2014) with a diagonal Hessian. The crux of proximal algorithms is the proximal step. We show that this step has a closed-form solution, whose form is similar to the use of element-wise adaptive learning rate. The proposed method also reduces to BinaryConnect (Courbariaux et al., 2015) and the Binary-Weight-Network (Hubara et al., 2016) when curvature information is dropped. Experiments on both feedforward and recurrent neural network models show that it outperforms existing binarization algorithms. In particular, BinaryConnect fails on deep recurrent networks because of the exploding gradient problem, while the proposed method still demonstrates robust performance.
1611.01600#5
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximation and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
http://arxiv.org/pdf/1611.01600
Lu Hou, Quanming Yao, James T. Kwok
cs.NE, cs.LG
null
null
cs.NE
20161105
20180510
[ { "id": "1605.04711" }, { "id": "1606.06160" }, { "id": "1502.04390" } ]
1611.01603
5
∗The majority of the work was done while the author was interning at the Allen Institute for AI. [Figure 1: BiDirectional Attention Flow Model (best viewed in color) — from bottom to top: Character Embedding Layer (Char-CNN) and Word Embedding Layer (GloVe) over the context and query, Contextual Embedding Layer, Attention Flow Layer (Context2Query and Query2Context attention), Modeling Layer, and Output Layer predicting the start and end positions.]
1611.01603#5
Bidirectional Attention Flow for Machine Comprehension
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.
http://arxiv.org/pdf/1611.01603
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi
cs.CL
Published as a conference paper at ICLR 2017
null
cs.CL
20161105
20180621
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1611.01604" }, { "id": "1609.05284" }, { "id": "1610.09996" }, { "id": "1606.01549" }, { "id": "1511.02274" }, { "id": "1505.00387" }, { "id": "1611.01436" }, { "id": "1611.01724" }, { "id": "1607.04423" } ]
1611.01626
5
In this paper we derive a link between the Q-values induced by a policy and the policy itself when the policy is the fixed point of a regularized policy gradient algorithm (where the gradient vanishes). This connection allows us to derive an estimate of the Q-values from the current policy, which we can refine using off-policy data and Q-learning. We show in the tabular setting that when the regularization penalty is small (the usual case) the resulting policy is close to the policy that would be found without the addition of the Q-learning update. Separately, we show that regularized actor-critic methods can be interpreted as action-value fitting methods, where the Q-values have been parameterized in a particular way. We conclude with some numerical examples that provide empirical evidence of improved data efficiency and stability of PGQL. 1.1 PRIOR WORK
1611.01626#5
Combining policy gradient and Q-learning
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
http://arxiv.org/pdf/1611.01626
Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih
cs.LG, cs.AI, math.OC, stat.ML
null
null
cs.LG
20161105
20170407
[ { "id": "1602.01783" }, { "id": "1509.02971" }, { "id": "1609.00150" }, { "id": "1512.04105" }, { "id": "1511.05952" }, { "id": "1504.00702" } ]
1611.01673
5
where $p_{\text{data}}(x)$ is the true data distribution and $p_z(z)$ is a simple (usually fixed) distribution that is easy to draw samples from (e.g., $\mathcal{N}(0, 1)$). We differentiate between the function space of discriminators, $\mathcal{D}$, and elements of this space, $D$. Let $p_G(x)$ be the distribution induced by the generator, $G_\theta(z)$. We assume $D$, $G$ to be deep neural networks as is typically the case. In their original work, Goodfellow et al. (2014) proved that given sufficient network capacities and an oracle providing the optimal discriminator, $D^* = \arg\max_D V(D, G)$, gradient descent on $p_G(x)$ will recover the desired globally optimal solution, $p_G(x) = p_{\text{data}}(x)$, so that the generator distribution exactly matches the data distribution. In practice, they replaced the second term, $\log(1 - D(G(z)))$, with $-\log(D(G(z)))$ to enhance gradient signals at the start of the game; note this is no longer a zero-sum game. Part of their convergence and optimality proof involves using the oracle, $D^*$, to reduce the minimax game to a minimization over $G$ only: $\min_G V(D^*, G) = \min_G \{ C(G) = -\log(4) + 2\,\mathrm{JSD}(p_{\text{data}} \| p_G) \}$ (2)
1611.01673#5
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
http://arxiv.org/pdf/1611.01673
Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
cs.LG, cs.MA, cs.NE
Accepted as a conference paper (poster) at ICLR 2017
null
cs.LG
20161105
20170302
[ { "id": "1511.06390" }, { "id": "1511.05897" }, { "id": "1610.02920" } ]
1611.01576
6
# 2 MODEL Each layer of a quasi-recurrent neural network consists of two kinds of subcomponents, analogous to convolution and pooling layers in CNNs. The convolutional component, like convolutional layers in CNNs, allows fully parallel computation across both minibatches and spatial dimensions, in this case the sequence dimension. The pooling component, like pooling layers in CNNs, lacks trainable parameters and allows fully parallel computation across minibatch and feature dimensions. Given an input sequence X ∈ R^{T×n} of T n-dimensional vectors x_1 . . . x_T, the convolutional subcomponent of a QRNN performs convolutions in the timestep dimension with a bank of m filters, producing a sequence Z ∈ R^{T×m} of m-dimensional candidate vectors z_t. In order to be useful for tasks that include prediction of the next token, the filters must not allow the computation for any given timestep to access information from future timesteps. That is, with filters of width k, each z_t depends only on x_{t−k+1} through x_t. This concept, known as a masked convolution (van den Oord et al., 2016), is implemented by padding the input to the left by the convolution’s filter size minus one.
1611.01576#6
Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
http://arxiv.org/pdf/1611.01576
James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher
cs.NE, cs.AI, cs.CL, cs.LG
Submitted to conference track at ICLR 2017
null
cs.NE
20161105
20161121
[ { "id": "1605.07725" }, { "id": "1508.06615" }, { "id": "1606.01305" }, { "id": "1610.10099" }, { "id": "1609.08144" }, { "id": "1602.00367" }, { "id": "1511.08630" }, { "id": "1609.07843" }, { "id": "1608.06993" }, { "id": "1610.03017" }, { "id": "1601.06759" }, { "id": "1606.02960" } ]
1611.01578
6
Modern neuro-evolution algorithms, e.g., Wierstra et al. (2005); Floreano et al. (2008); Stanley et al. (2009), on the other hand, are much more flexible for composing novel models, yet they are usually less practical at a large scale. Their limitations lie in the fact that they are search-based methods, thus they are slow or require many heuristics to work well. Neural Architecture Search has some parallels to program synthesis and inductive programming, the idea of searching a program from examples (Summers, 1977; Biermann, 1978). In machine learning, probabilistic program induction has been used successfully in many settings, such as learning to solve simple Q&A (Liang et al., 2010; Neelakantan et al., 2015; Andreas et al., 2016), sort a list of numbers (Reed & de Freitas, 2015), and learning with very few examples (Lake et al., 2015).
1611.01578#6
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
http://arxiv.org/pdf/1611.01578
Barret Zoph, Quoc V. Le
cs.LG, cs.AI, cs.NE
null
null
cs.LG
20161105
20170215
[ { "id": "1611.01462" }, { "id": "1607.03474" }, { "id": "1603.05027" }, { "id": "1609.09106" }, { "id": "1511.06732" }, { "id": "1508.06615" }, { "id": "1606.04474" }, { "id": "1608.05859" }, { "id": "1609.08144" }, { "id": "1606.01885" }, { "id": "1505.00387" }, { "id": "1609.07843" }, { "id": "1512.05287" }, { "id": "1603.09382" }, { "id": "1608.06993" }, { "id": "1605.07648" } ]
1611.01600
6
√ Notations: For a vector x, \/x denotes the element-wise square root, |x| denotes the element-wise absolute value, ||x||,) = (0; |x|?) is the p-norm of x, x + 0 denotes that all entries of x are positive, sign(x) is the vector with [sign(x)]; = lif x; > 0 and —1 otherwise, and Diag(x) returns a diagonal matrix with x on the diagonal. For two vectors x and y, x © y denotes the element- wise multiplication and x @ y denotes the element-wise division. For a matrix X, vec(X) returns the vector obtained by stacking the columns of X, and diag(X) returns a diagonal matrix whose diagonal elements are extracted from diagonal of X. # 2 RELATED WORK 2.1 WEIGHT BINARIZATION IN DEEP NETWORKS
1611.01600#6
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximation and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
http://arxiv.org/pdf/1611.01600
Lu Hou, Quanming Yao, James T. Kwok
cs.NE, cs.LG
null
null
cs.NE
20161105
20180510
[ { "id": "1605.04711" }, { "id": "1606.06160" }, { "id": "1502.04390" } ]
1611.01603
6
Figure 1: BiDirectional Attention Flow Model (best viewed in color) query-aware context representation (the output of the attention layer). It also allows the attention at each time step to be unaffected by incorrect attendances at previous time steps. Our experiments show that memory-less attention gives a clear advantage over dynamic attention. Third, we use attention mechanisms in both directions, query-to-context and context-to-query, which provide complementary information to each other. Our BIDAF model outperforms all previous approaches on the highly-competitive Stanford Question Answering Dataset (SQuAD) test set leaderboard at the time of submission. With a modification to only the output layer, BIDAF achieves the state-of-the-art results on the CNN/DailyMail cloze test. We also provide an in-depth ablation study of our model on the SQuAD development set, visualize the intermediate feature spaces in our model, and analyse its performance as compared to a more traditional language model for machine comprehension (Rajpurkar et al., 2016). 2 MODEL Our machine comprehension model is a hierarchical multi-stage process and consists of six layers (Figure 1): 1. Character Embedding Layer maps each word to a vector space using character-level CNNs.
1611.01603#6
Bidirectional Attention Flow for Machine Comprehension
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.
http://arxiv.org/pdf/1611.01603
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi
cs.CL
Published as a conference paper at ICLR 2017
null
cs.CL
20161105
20180621
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1611.01604" }, { "id": "1609.05284" }, { "id": "1610.09996" }, { "id": "1606.01549" }, { "id": "1511.02274" }, { "id": "1505.00387" }, { "id": "1611.01436" }, { "id": "1611.01724" }, { "id": "1607.04423" } ]
1611.01626
6
Here we highlight various axes along which our work can be compared to others. In this paper we use entropy regularization to ensure exploration in the policy, which is a common practice in policy gradient (Williams & Peng, 1991; Mnih et al., 2016). An alternative is to use KL-divergence instead of entropy as a regularizer, or as a constraint on how much deviation is permitted from a prior policy (Bagnell & Schneider, 2003; Peters et al., 2010; Schulman et al., 2015; Fox et al., 2015). Natural policy gradient can also be interpreted as putting a constraint on the KL-divergence at each step of the policy improvement (Amari, 1998; Kakade, 2001; Pascanu & Bengio, 2013). In Sallans & Hinton (2004) the authors use a Boltzmann exploration policy over estimated Q-values which they update using TD-learning. In Heess et al. (2012) this was extended to use an actor-critic algorithm instead of TD-learning, however the two updates were not combined as we have done in this paper. In Azar et al. (2012) the authors develop an algorithm called dynamic policy programming, whereby they apply a Bellman-like update
1611.01626#6
Combining policy gradient and Q-learning
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
http://arxiv.org/pdf/1611.01626
Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih
cs.LG, cs.AI, math.OC, stat.ML
null
null
cs.LG
20161105
20170407
[ { "id": "1602.01783" }, { "id": "1509.02971" }, { "id": "1609.00150" }, { "id": "1512.04105" }, { "id": "1511.05952" }, { "id": "1504.00702" } ]
1611.01673
6
min V(D*,G) = min {C(G) = —log(4) +2 JSD(Paatallpc) } (2) where JSD denotes Jensen-Shannon divergence. Minimizing C(G) necessarily minimizes JS'D, however, we rarely know D* and so we instead minimize V(D, G), which is only a lower bound. This perspective of minimizing the distance between the distributions, paara and pg, motivated Li et al. (2015) to develop a generative model that matches all moments of pg(x) with Paata(x) (at optimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN, (Zhao et al. (2016)) explores a larger class of games (non-zero-sum games) which generalize the generator and discriminator objectives to take real-valued “energies” as input instead of probabilities. Nowozin et al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANS to more general divergences, specifically f-divergences and then Bregman-divergences respectively.
1611.01673#6
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
http://arxiv.org/pdf/1611.01673
Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
cs.LG, cs.MA, cs.NE
Accepted as a conference paper (poster) at ICLR 2017
null
cs.LG
20161105
20170302
[ { "id": "1511.06390" }, { "id": "1511.05897" }, { "id": "1610.02920" } ]
1611.01576
7
We apply additional convolutions with separate filter banks to obtain sequences of vectors for the elementwise gates that are needed for the pooling function. While the candidate vectors are passed through a tanh nonlinearity, the gates use an elementwise sigmoid. If the pooling function requires a forget gate f_t and an output gate o_t at each timestep, the full set of computations in the convolutional component is then: Z = tanh(W_z ∗ X), F = σ(W_f ∗ X), O = σ(W_o ∗ X), (1) where W_z, W_f, and W_o, each in R^{k×n×m}, are the convolutional filter banks and ∗ denotes a masked convolution along the timestep dimension. Note that if the filter width is 2, these equations reduce to the LSTM-like z_t = tanh(W^1_z x_{t−1} + W^2_z x_t), f_t = σ(W^1_f x_{t−1} + W^2_f x_t), o_t = σ(W^1_o x_{t−1} + W^2_o x_t). (2) Convolution filters of larger width effectively compute higher n-gram features at each timestep; thus larger widths are especially important for character-level tasks.
1611.01576#7
Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
http://arxiv.org/pdf/1611.01576
James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher
cs.NE, cs.AI, cs.CL, cs.LG
Submitted to conference track at ICLR 2017
null
cs.NE
20161105
20161121
[ { "id": "1605.07725" }, { "id": "1508.06615" }, { "id": "1606.01305" }, { "id": "1610.10099" }, { "id": "1609.08144" }, { "id": "1602.00367" }, { "id": "1511.08630" }, { "id": "1609.07843" }, { "id": "1608.06993" }, { "id": "1610.03017" }, { "id": "1601.06759" }, { "id": "1606.02960" } ]
1611.01578
7
The controller in Neural Architecture Search is auto-regressive, which means it predicts hyperparameters one at a time, conditioned on previous predictions. This idea is borrowed from the decoder in end-to-end sequence to sequence learning (Sutskever et al., 2014). Unlike sequence to sequence learning, our method optimizes a non-differentiable metric, which is the accuracy of the child network. It is therefore similar to the work on BLEU optimization in Neural Machine Translation (Ranzato et al., 2015; Shen et al., 2016). Unlike these approaches, our method learns directly from the reward signal without any supervised bootstrapping. Also related to our work is the idea of learning to learn or meta-learning (Thrun & Pratt, 2012), a general framework of using information learned in one task to improve a future task. More closely related is the idea of using a neural network to learn the gradient descent updates for another network (Andrychowicz et al., 2016) and the idea of using reinforcement learning to find update policies for another network (Li & Malik, 2016). # 3 METHODS
1611.01578#7
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
http://arxiv.org/pdf/1611.01578
Barret Zoph, Quoc V. Le
cs.LG, cs.AI, cs.NE
null
null
cs.LG
20161105
20170215
[ { "id": "1611.01462" }, { "id": "1607.03474" }, { "id": "1603.05027" }, { "id": "1609.09106" }, { "id": "1511.06732" }, { "id": "1508.06615" }, { "id": "1606.04474" }, { "id": "1608.05859" }, { "id": "1609.08144" }, { "id": "1606.01885" }, { "id": "1505.00387" }, { "id": "1609.07843" }, { "id": "1512.05287" }, { "id": "1603.09382" }, { "id": "1608.06993" }, { "id": "1605.07648" } ]
1611.01600
7
# 2 RELATED WORK 2.1 WEIGHT BINARIZATION IN DEEP NETWORKS In a feedforward neural network with $L$ layers, let the weight matrix (or tensor in the case of a convolutional layer) at layer $l$ be $W_l$. We combine the (full-precision) weights from all layers as $w = [w_1^\top, w_2^\top, \dots, w_L^\top]^\top$, where $w_l = \text{vec}(W_l)$. Analogously, the binarized weights are denoted as $\hat{w} = [\hat{w}_1^\top, \hat{w}_2^\top, \dots, \hat{w}_L^\top]^\top$. As it is essential to use full-precision weights during updates (Courbariaux et al., 2015), typically binarized weights are only used during the forward and backward propagations, but not on parameter update. At the $t$-th iteration, the (full-precision) weight $w_l^t$ is updated by using the backpropagated gradient $\nabla_l \ell(\hat{w}^{t-1})$ (where $\ell$ is the loss and $\nabla_l \ell(\hat{w}^{t-1})$ is the partial derivative of $\ell$ w.r.t. the weights of the $l$-th layer). In the next forward propagation, it is then binarized as $\hat{w}_l^t = \text{Binarize}(w_l^t)$, where $\text{Binarize}(\cdot)$ is some binarization scheme.
1611.01600#7
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximation and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
http://arxiv.org/pdf/1611.01600
Lu Hou, Quanming Yao, James T. Kwok
cs.NE, cs.LG
null
null
cs.NE
20161105
20180510
[ { "id": "1605.04711" }, { "id": "1606.06160" }, { "id": "1502.04390" } ]
1611.01603
7
1. Character Embedding Layer maps each word to a vector space using character-level CNNs. 2. Word Embedding Layer maps each word to a vector space using a pre-trained word embedding model. 3. Contextual Embedding Layer utilizes contextual cues from surrounding words to refine the embedding of the words. These first three layers are applied to both the query and context. 4. Attention Flow Layer couples the query and context vectors and produces a set of query-aware feature vectors for each word in the context. 5. Modeling Layer employs a Recurrent Neural Network to scan the context. 6. Output Layer provides an answer to the query. Our code and interactive demo are available at: allenai.github.io/bi-att-flow/
1611.01603#7
Bidirectional Attention Flow for Machine Comprehension
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.
http://arxiv.org/pdf/1611.01603
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi
cs.CL
Published as a conference paper at ICLR 2017
null
cs.CL
20161105
20180621
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1611.01604" }, { "id": "1609.05284" }, { "id": "1610.09996" }, { "id": "1606.01549" }, { "id": "1511.02274" }, { "id": "1505.00387" }, { "id": "1611.01436" }, { "id": "1611.01724" }, { "id": "1607.04423" } ]
1611.01626
7
as we have done in this paper. In Azar et al. (2012) the authors develop an algorithm called dynamic policy programming, whereby they apply a Bellman-like update to the action-preferences of a policy, which is similar in spirit to the update we describe here. In Norouzi et al. (2016) the authors augment a maximum likelihood objective with a reward in a supervised learning setting, and develop a connection that resembles the one we develop here between the policy and the Q-values. Other works have attempted to combine on and off-policy learning, primarily using action-value fitting methods (Wang et al., 2013; Hausknecht & Stone, 2016; Lehnert & Precup, 2015), with varying degrees of success. In this paper we establish a connection between actor-critic algorithms and action-value learning algorithms. In particular we show that TD-actor-critic (Konda & Tsitsiklis, 2003) is equivalent to expected-SARSA (Sutton & Barto, 1998, Exercise 6.10) with Boltzmann exploration where the Q-values are decomposed into advantage function and value function. The algorithm we develop extends actor-critic with a Q-learning style update
1611.01626#7
Combining policy gradient and Q-learning
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
http://arxiv.org/pdf/1611.01626
Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih
cs.LG, cs.AI, math.OC, stat.ML
null
null
cs.LG
20161105
20170407
[ { "id": "1602.01783" }, { "id": "1509.02971" }, { "id": "1609.00150" }, { "id": "1512.04105" }, { "id": "1511.05952" }, { "id": "1504.00702" } ]
1611.01673
7
In general, these approaches focus on exploring fundamental reformulations of V(D, G). Similarly, our work focuses on a fundamental reformulation; however, our aim is to provide a framework that accelerates training of the generator to a more robust state irrespective of the choice of V. 2.1 GMAN: A MULTI-ADVERSARIAL EXTENSION We propose introducing multiple discriminators, which brings with it a number of design possibilities. We explore approaches ranging between two extremes: 1) a more discriminating D (better approximating max_D V(D, G)) and 2) a D better matched to the generator’s capabilities. Mathematically, we reformulate G’s objective as min_G max F(V(D_1, G), ..., V(D_N, G)) for different choices of F (see Figure 1). Each D_i is still expected to independently maximize its own V(D_i, G) (i.e. no cooperation). We sometimes abbreviate V(D_i, G) with V_i and F(V_1, ..., V_N) with F_G(V_i). # 3 A FORMIDABLE ADVERSARY Here, we consider multi-discriminator variants that attempt to better approximate max_D V(D, G), providing a harsher critic to the generator.
1611.01673#7
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
http://arxiv.org/pdf/1611.01673
Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
cs.LG, cs.MA, cs.NE
Accepted as a conference paper (poster) at ICLR 2017
null
cs.LG
20161105
20170302
[ { "id": "1511.06390" }, { "id": "1511.05897" }, { "id": "1610.02920" } ]
1611.01576
8
Convolution filters of larger width effectively compute higher n-gram features at each timestep; thus larger widths are especially important for character-level tasks. Suitable functions for the pooling subcomponent can be constructed from the familiar elementwise gates of the traditional LSTM cell. We seek a function controlled by gates that can mix states across timesteps, but which acts independently on each channel of the state vector. The simplest option, which Balduzzi & Ghifary (2016) term “dynamic average pooling”, uses only a forget gate: h_t = f_t ⊙ h_{t−1} + (1 − f_t) ⊙ z_t, (3) where ⊙ denotes elementwise multiplication. The function may also include an output gate: c_t = f_t ⊙ c_{t−1} + (1 − f_t) ⊙ z_t, h_t = o_t ⊙ c_t. (4) Or the recurrence relation may include an independent input and forget gate: c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t, h_t = o_t ⊙ c_t. (5)
1611.01576#8
Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
http://arxiv.org/pdf/1611.01576
James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher
cs.NE, cs.AI, cs.CL, cs.LG
Submitted to conference track at ICLR 2017
null
cs.NE
20161105
20161121
[ { "id": "1605.07725" }, { "id": "1508.06615" }, { "id": "1606.01305" }, { "id": "1610.10099" }, { "id": "1609.08144" }, { "id": "1602.00367" }, { "id": "1511.08630" }, { "id": "1609.07843" }, { "id": "1608.06993" }, { "id": "1610.03017" }, { "id": "1601.06759" }, { "id": "1606.02960" } ]
1611.01578
8
# 3 METHODS In the following section, we will first describe a simple method of using a recurrent network to generate convolutional architectures. We will show how the recurrent network can be trained with a policy gradient method to maximize the expected accuracy of the sampled architectures. We will present several improvements of our core approach such as forming skip connections to increase model complexity and using a parameter server approach to speed up training. In the last part of the section, we will focus on generating recurrent architectures, which is another key contribution of our paper. 3.1 GENERATE MODEL DESCRIPTIONS WITH A CONTROLLER RECURRENT NEURAL NETWORK In Neural Architecture Search, we use a controller to generate architectural hyperparameters of neural networks. To be flexible, the controller is implemented as a recurrent neural network. Suppose we would like to predict feedforward neural networks with only convolutional layers; we can use the controller to generate their hyperparameters as a sequence of tokens: [Figure 2: the controller predicts filter height, filter width, stride height, stride width, and number of filters for one layer, then repeats for the following layers.]
1611.01578#8
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
http://arxiv.org/pdf/1611.01578
Barret Zoph, Quoc V. Le
cs.LG, cs.AI, cs.NE
null
null
cs.LG
20161105
20170215
[ { "id": "1611.01462" }, { "id": "1607.03474" }, { "id": "1603.05027" }, { "id": "1609.09106" }, { "id": "1511.06732" }, { "id": "1508.06615" }, { "id": "1606.04474" }, { "id": "1608.05859" }, { "id": "1609.08144" }, { "id": "1606.01885" }, { "id": "1505.00387" }, { "id": "1609.07843" }, { "id": "1512.05287" }, { "id": "1603.09382" }, { "id": "1608.06993" }, { "id": "1605.07648" } ]
1611.01600
8
The two most popular binarization schemes are BinaryConnect (Courbariaux et al., 2015) and Binary-Weight-Network (BWN) (Rastegari et al., 2016). In BinaryConnect, binarization is performed by transforming each element of w_l^t to −1 or +1 using the sign function:¹ Binarize(w_l^t) = sign(w_l^t). (1) Besides the binarized weight matrix, a scaling parameter is also learned in BWN. In other words, Binarize(w_l^t) = α_l^t b_l^t, where α_l^t > 0 and b_l^t is binary. They are obtained by minimizing the difference between w_l^t and α_l^t b_l^t: α_l^t = ‖w_l^t‖_1 / n_l, b_l^t = sign(w_l^t), (2) where n_l is the number of weights in layer l. Hubara et al. (2016) further binarized the activations as x̂_l^t = sign(x_l^t), where x_l^t is the activation of the lth layer at iteration t. 2.2 PROXIMAL NEWTON ALGORITHM The proximal Newton algorithm (Lee et al., 2014) has been popularly used for solving composite optimization problems of the form min_x f(x) + g(x), ¹ A stochastic binarization scheme is also proposed in (Courbariaux et al., 2015). However, it is much more computationally expensive than (1) and so will not be considered here.
1611.01600#8
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximation and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
http://arxiv.org/pdf/1611.01600
Lu Hou, Quanming Yao, James T. Kwok
cs.NE, cs.LG
null
null
cs.NE
20161105
20180510
[ { "id": "1605.04711" }, { "id": "1606.06160" }, { "id": "1502.04390" } ]
1611.01603
8
¹ Our code and interactive demo are available at: allenai.github.io/bi-att-flow/ 1. Character Embedding Layer. The character embedding layer is responsible for mapping each word to a high-dimensional vector space. Let {x_1, . . . , x_T} and {q_1, . . . , q_J} represent the words in the input context paragraph and query, respectively. Following Kim (2014), we obtain the character-level embedding of each word using Convolutional Neural Networks (CNN). Characters are embedded into vectors, which can be considered as 1D inputs to the CNN, and whose size is the input channel size of the CNN. The outputs of the CNN are max-pooled over the entire width to obtain a fixed-size vector for each word. 2. Word Embedding Layer. The word embedding layer also maps each word to a high-dimensional vector space. We use pre-trained word vectors, GloVe (Pennington et al., 2014), to obtain the fixed word embedding of each word.
1611.01603#8
Bidirectional Attention Flow for Machine Comprehension
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results in Stanford Question Answering Dataset (SQuAD) and CNN/DailyMail cloze test.
http://arxiv.org/pdf/1611.01603
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi
cs.CL
Published as a conference paper at ICLR 2017
null
cs.CL
20161105
20180621
[ { "id": "1606.02245" }, { "id": "1608.07905" }, { "id": "1611.01604" }, { "id": "1609.05284" }, { "id": "1610.09996" }, { "id": "1606.01549" }, { "id": "1511.02274" }, { "id": "1505.00387" }, { "id": "1611.01436" }, { "id": "1611.01724" }, { "id": "1607.04423" } ]
1611.01626
8
exploration where the Q-values are decomposed into advantage function and value function. The algorithm we develop extends actor-critic with a Q-learning style update that, due to the decomposition of the Q-values, resembles the update of the dueling architecture (Wang et al., 2016). Recently, the field of deep reinforcement learning, i.e., the use of deep neural networks to represent action-values or a policy, has seen a lot of success (Mnih et al., 2015; 2016; Silver et al., 2016; Riedmiller, 2005; Lillicrap et al., 2015; Van Hasselt et al., 2016). In the examples section we use a neural network with PGQL to play the Atari games suite.
1611.01626#8
Combining policy gradient and Q-learning
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as 'PGQL', for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
http://arxiv.org/pdf/1611.01626
Brendan O'Donoghue, Remi Munos, Koray Kavukcuoglu, Volodymyr Mnih
cs.LG, cs.AI, math.OC, stat.ML
null
null
cs.LG
20161105
20170407
[ { "id": "1602.01783" }, { "id": "1509.02971" }, { "id": "1609.00150" }, { "id": "1512.04105" }, { "id": "1511.05952" }, { "id": "1504.00702" } ]
1611.01673
8
Figure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators. If F := max, G trains against the best discriminator. If F := mean, G trains against an ensemble. We explore other alternatives to F in Sections 4.1 & 4.4 that improve on both these options. 3.1 MAXIMIZING V(D, G) For a fixed G, maximizing F_G(V_i) with F := max and N randomly instantiated copies of our discriminator is functionally equivalent to optimizing V (e.g., stochastic gradient ascent) with random restarts in parallel and then presenting max_{j∈{1,...,N}} V(D_j, G) as the loss to the generator — a very pragmatic approach to the difficulties presented by the non-convexity of V caused by the deep net. Requiring the generator to minimize the max forces G to generate high-fidelity samples that must hold up under the scrutiny of all N discriminators, each potentially representing a distinct maximum.
1611.01673#8
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
http://arxiv.org/pdf/1611.01673
Ishan Durugkar, Ian Gemp, Sridhar Mahadevan
cs.LG, cs.MA, cs.NE
Accepted as a conference paper (poster) at ICLR 2017
null
cs.LG
20161105
20170302
[ { "id": "1511.06390" }, { "id": "1511.05897" }, { "id": "1610.02920" } ]
1611.01576
9
Or the recurrence relation may include an independent input and forget gate: c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t, h_t = o_t ⊙ c_t. (5) We term these three options f-pooling, fo-pooling, and ifo-pooling respectively; in each case we initialize h or c to zero. Although the recurrent parts of these functions must be calculated for each timestep in sequence, their simplicity and parallelism along feature dimensions means that, in practice, evaluating them over even long sequences requires a negligible amount of computation time. A single QRNN layer thus performs an input-dependent pooling, followed by a gated linear combination of convolutional features. As with convolutional neural networks, two or more QRNN layers should be stacked to create a model with the capacity to approximate more complex functions. 2.1 VARIANTS Motivated by several common natural language tasks, and the long history of work on related architectures, we introduce several extensions to the stacked QRNN described above. Notably, many extensions to both recurrent and convolutional models can be applied directly to the QRNN as it combines elements of both model types. Regularization An important extension to the stacked QRNN is a robust regularization scheme inspired by recent work in regularizing LSTMs.
1611.01576#9
Quasi-Recurrent Neural Networks
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
http://arxiv.org/pdf/1611.01576
James Bradbury, Stephen Merity, Caiming Xiong, Richard Socher
cs.NE, cs.AI, cs.CL, cs.LG
Submitted to conference track at ICLR 2017
null
cs.NE
20161105
20161121
[ { "id": "1605.07725" }, { "id": "1508.06615" }, { "id": "1606.01305" }, { "id": "1610.10099" }, { "id": "1609.08144" }, { "id": "1602.00367" }, { "id": "1511.08630" }, { "id": "1609.07843" }, { "id": "1608.06993" }, { "id": "1610.03017" }, { "id": "1601.06759" }, { "id": "1606.02960" } ]
1611.01578
9
Figure 2: How our controller recurrent neural network samples a simple convolutional network. It predicts filter height, filter width, stride height, stride width, and number of filters for one layer and repeats. Every prediction is carried out by a softmax classifier and then fed into the next time step as input. In our experiments, the process of generating an architecture stops if the number of layers exceeds a certain value. This value follows a schedule where we increase it as training progresses. Once the controller RNN finishes generating an architecture, a neural network with this architecture is built and trained. At convergence, the accuracy of the network on a held-out validation set is recorded. The parameters of the controller RNN, θc, are then optimized in order to maximize the expected validation accuracy of the proposed architectures. In the next section, we will describe a policy gradient method which we use to update parameters θc so that the controller RNN generates better architectures over time. # 3.2 TRAINING WITH REINFORCE
1611.01578#9
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
http://arxiv.org/pdf/1611.01578
Barret Zoph, Quoc V. Le
cs.LG, cs.AI, cs.NE
null
null
cs.LG
20161105
20170215
[ { "id": "1611.01462" }, { "id": "1607.03474" }, { "id": "1603.05027" }, { "id": "1609.09106" }, { "id": "1511.06732" }, { "id": "1508.06615" }, { "id": "1606.04474" }, { "id": "1608.05859" }, { "id": "1609.08144" }, { "id": "1606.01885" }, { "id": "1505.00387" }, { "id": "1609.07843" }, { "id": "1512.05287" }, { "id": "1603.09382" }, { "id": "1608.06993" }, { "id": "1605.07648" } ]
1611.01600
9
where f is convex and smooth, and g is convex but possibly nonsmooth. At iteration t, it generates the next iterate as x_{t+1} = arg min_x ∇f(x_t)^⊤ (x − x_t) + (1/2)(x − x_t)^⊤ H (x − x_t) + g(x), where H is an approximate Hessian matrix of f at x_t. With the use of second-order information, the proximal Newton algorithm converges faster than the proximal gradient algorithm (Lee et al., 2014). Recently, by assuming that f and g have difference-of-convex decompositions (Yuille & Rangarajan, 2002), the proximal Newton algorithm is also extended to the case where g is nonconvex (Rakotomamonjy et al., 2016). # 3 LOSS-AWARE BINARIZATION
1611.01600#9
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximation and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
http://arxiv.org/pdf/1611.01600
Lu Hou, Quanming Yao, James T. Kwok
cs.NE, cs.LG
null
null
cs.NE
20161105
20180510
[ { "id": "1605.04711" }, { "id": "1606.06160" }, { "id": "1502.04390" } ]