Algorithm 4 presents the pseudo code for TD(0) with function approximation. v̂(s, w) is the approximate value function, w is the value function weight vector, ∇v̂(s, w) is the gradient of the approximate value function with respect to the weight vector, and the weight vector is updated following the update rule w ← w + α[r + γv̂(s′, w) − v̂(s, w)]∇v̂(s, w).
Input: the policy π to be evaluated
Input: a differentiable value function v̂(s, w), with v̂(terminal, ·) = 0
Output: value function v̂(s, w)
initialize value function weight w arbitrarily, e.g., w = 0
for each episode do
    initialize state s
    for each step of episode, while state s is not terminal do
        a ~ π(·|s)
        take action a, observe r, s′
        w ← w + α[r + γv̂(s′, w) − v̂(s, w)]∇v̂(s, w)
        s ← s′
    end
end
Algorithm 4: TD(0) with function approximation, adapted from Sutton and Barto (2018)
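The following is a minimal Python sketch of TD(0) with linear function approximation on a small random-walk task; the environment, the one-hot features, and the hyper-parameters are illustrative assumptions, not part of the algorithm as stated above.

```python
import numpy as np

# Minimal sketch: TD(0) with linear function approximation, v_hat(s, w) = w . phi(s).
# The 5-state random walk and one-hot features are illustrative assumptions.
n_states, gamma, alpha = 5, 1.0, 0.1
phi = np.eye(n_states)                    # feature vector phi(s) for each non-terminal state
w = np.zeros(n_states)                    # value-function weights
rng = np.random.default_rng(0)

def v_hat(s):
    return 0.0 if s in (-1, n_states) else w @ phi[s]   # terminal states have value 0

for episode in range(2000):
    s = n_states // 2                     # start in the middle
    while s not in (-1, n_states):        # -1 and n_states are the terminal states
        s_next = s + rng.choice([-1, 1])  # random policy: step left or right
        r = 1.0 if s_next == n_states else 0.0
        # TD(0) update: w <- w + alpha * [r + gamma * v_hat(s') - v_hat(s)] * grad v_hat(s)
        delta = r + gamma * v_hat(s_next) - v_hat(s)
        w += alpha * delta * phi[s]       # gradient of w . phi(s) w.r.t. w is phi(s)
        s = s_next

print(np.round(w, 2))   # approaches the true values 1/6 .. 5/6
```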
When combining off-policy learning, function approximation, and bootstrapping, instability and divergence may occur (Tsitsiklis and Van Roy, 1997), which is called the deadly triad issue (Sutton and Barto, 2018). All three elements are necessary: function approximation for scalability and generalization, bootstrapping for computational and data efficiency, and off-policy learning for freeing the behaviour policy from the target policy. What is the root cause of the instability? Learning or sampling is not, since dynamic programming suffers from divergence with function approximation; exploration, greedification, or control is not, since prediction alone can diverge; local minima or complex non-linear function approximation are not, since linear function approximation can produce instability (Sutton, 2016). The root cause of the instability thus remains unclear, since no single factor mentioned above is responsible on its own, and there are still many open problems in off-policy learning (Sutton and Barto, 2018).
Table 1 presents various algorithms that tackle various issues (Sutton, 2016). Deep RL algorithms like Deep Q-Network (Mnih et al., 2015) and A3C (Mnih et al., 2016) are not presented here, since they do not have theoretical guarantees, although they achieve stunning performance empirically.
Before explaining Table 1, we introduce some background definitions. Recall that the Bellman equation for the value function is $v_\pi(s) = \sum_a \pi(a|s) \sum_{s', r} p(s', r|s, a)[r + \gamma v_\pi(s')]$. The Bellman operator is defined as $(B_\pi v)(s) = \sum_a \pi(a|s) \sum_{s', r} p(s', r|s, a)[r + \gamma v(s')]$. The TD fixed point is then $v_\pi = B_\pi v_\pi$. The Bellman error for the function approximation case is then $\sum_a \pi(a|s) \sum_{s', r} p(s', r|s, a)[r + \gamma \hat{v}(s', w)] - \hat{v}(s, w)$, the right side of the Bellman equation with function approximation minus the left side. It can be written as $B_\pi v_w - v_w$. The Bellman error is the expectation of the TD error.
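As a small numerical illustration of these definitions, the sketch below evaluates the Bellman operator, its fixed point, and the Bellman error on a hypothetical two-state MDP; the transition probabilities and rewards are made up for the example.

```python
import numpy as np

# Hypothetical 2-state, 1-action MDP: P[s, s'] transition probabilities, R[s] expected reward.
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])
R = np.array([1.0, 0.0])
gamma = 0.9

def bellman_operator(v):
    # (B_pi v)(s) = sum_{s'} p(s'|s) [r(s) + gamma * v(s')]  (single action here)
    return R + gamma * P @ v

# The value function v_pi is the fixed point of B_pi: v_pi = B_pi v_pi.
v_pi = np.linalg.solve(np.eye(2) - gamma * P, R)
print(np.allclose(bellman_operator(v_pi), v_pi))   # True

# Bellman error of an arbitrary approximation v_w: B_pi v_w - v_w.
v_w = np.array([4.0, 2.0])
print(bellman_operator(v_w) - v_w)
```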
ADP algorithms refer to dynamic programming algorithms, like policy evaluation, policy iteration, and value iteration, with function approximation. Least squares temporal difference (LSTD) (Bradtke and Barto, 1996) computes the TD fixed point directly in batch mode. LSTD is data efficient, yet with squared time complexity. LSPE (Nedić and Bertsekas, 2003) extended LSTD. Fitted-Q algorithms (Ernst et al., 2005; Riedmiller, 2005) learn action values in batch mode. Residual gradient algorithms (Baird, 1995) minimize the Bellman error. Gradient-TD methods (Sutton et al., 2009a;b; Mahmood et al., 2014) are true gradient algorithms: they perform SGD in the projected Bellman error (PBE) and converge robustly under off-policy training and non-linear function approximation. Emphatic-TD (Sutton et al., 2016) emphasizes some updates and de-emphasizes others by reweighting, improving computational efficiency, yet being a semi-gradient method. See Sutton and Barto (2018) for more details.
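To illustrate computing the TD fixed point directly in batch mode, here is a minimal LSTD(0) sketch with linear features; the toy batch of transitions and the one-hot features are assumptions for the example, not part of the original LSTD formulation.

```python
import numpy as np

# Minimal LSTD(0) sketch: solve A w = b for the TD fixed point from a batch of transitions,
# where A = sum phi(s)(phi(s) - gamma*phi(s'))^T and b = sum r*phi(s).
def lstd(transitions, phi, gamma=0.95, reg=1e-6):
    d = phi(transitions[0][0]).shape[0]
    A = reg * np.eye(d)           # small ridge term for invertibility
    b = np.zeros(d)
    for s, r, s_next, terminal in transitions:
        x = phi(s)
        x_next = np.zeros(d) if terminal else phi(s_next)
        A += np.outer(x, x - gamma * x_next)
        b += r * x
    return np.linalg.solve(A, b)

# Tiny illustrative usage with 3 states and one-hot features.
phi = lambda s: np.eye(3)[s]
batch = [(0, 0.0, 1, False), (1, 0.0, 2, False), (2, 1.0, 2, True)]
print(lstd(batch, phi))
```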
Table 1: RL Issues vs. Algorithms. The table compares TD(λ) and SARSA(λ), ADP, LSTD(λ) and LSPE(λ), Fitted-Q, Residual Gradient, and GTD(λ) and GQ(λ) on the issues of linear computation, convergence with non-linear function approximation, convergence under off-policy training, model-free and online operation, and convergence to PBE = 0.

2.3.8 POLICY OPTIMIZATION

In contrast to value-based methods like TD learning and Q-learning, policy-based methods optimize the policy π(a|s; θ) (with function approximation) directly, and update the parameters θ by gradient ascent on E[Rt]. REINFORCE (Williams, 1992) is a policy gradient method, updating θ in the direction of ∇θ log π(at|st; θ)Rt. Usually a baseline bt(st) is subtracted from the return to reduce the variance of the gradient estimate while keeping it unbiased, yielding the gradient direction ∇θ log π(at|st; θ)(Rt − bt(st)). Using V(st) as the baseline bt(st), we have the advantage
function A(at, st) = Q(at, st) − V(st), since Rt is an estimate of Q(at, st). Algorithm 5 presents the pseudo code for the REINFORCE algorithm in the episodic case.
Input: policy π(a|s, θ), v̂(s, w)
Parameters: step sizes, α > 0, β > 0
Output: policy π(a|s, θ)
initialize policy parameter θ and state-value weights w
for true do
    generate an episode s_0, a_0, r_1, ..., s_{T−1}, a_{T−1}, r_T, following π(·|·, θ)
    for each step t of episode 0, ..., T − 1 do
        G_t ← return from step t
        δ ← G_t − v̂(s_t, w)
        w ← w + β δ ∇_w v̂(s_t, w)
        θ ← θ + α γ^t δ ∇_θ log π(a_t|s_t, θ)
    end
end
Algorithm 5: REINFORCE with baseline (episodic), adapted from Sutton and Barto (2018)
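Below is a minimal Python sketch in the spirit of Algorithm 5, on a hypothetical two-action, one-step (bandit-style) task with a softmax policy; the reward model and step sizes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of REINFORCE with a learned baseline: one-step episodes, softmax policy.
rng = np.random.default_rng(0)
theta = np.zeros(2)        # policy parameters (logits)
w = 0.0                    # state-value weight for the single state (the baseline)
alpha, beta = 0.05, 0.1
true_means = np.array([0.2, 0.8])          # assumed expected rewards of the two actions

def pi(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

for episode in range(3000):
    probs = pi(theta)
    a = rng.choice(2, p=probs)             # generate the (one-step) episode
    G = rng.normal(true_means[a], 0.1)     # return from step 0
    delta = G - w                          # G_t - v_hat(s_t, w)
    w += beta * delta                      # baseline update; grad_w v_hat is 1 here
    grad_log_pi = np.eye(2)[a] - probs     # grad_theta log softmax
    theta += alpha * delta * grad_log_pi   # gamma**t = 1 since t = 0

print(pi(theta))   # probability mass shifts toward the better action
```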
In actor-critic algorithms, the critic updates the action-value function parameters, and the actor updates the policy parameters, in the direction suggested by the critic. Algorithm 6 presents the pseudo code for the one-step actor-critic algorithm in the episodic case.
Input: policy π(a|s, θ), v̂(s, w)
Parameters: step sizes, α > 0, β > 0
Output: policy π(a|s, θ)
initialize policy parameter θ and state-value weights w
for true do
    initialize s, the first state of the episode
    I ← 1
    for s is not terminal do
        a ~ π(·|s, θ)
        take action a, observe s′, r
        δ ← r + γv̂(s′, w) − v̂(s, w)    (if s′ is terminal, v̂(s′, w) = 0)
        w ← w + β δ ∇_w v̂(s, w)
        θ ← θ + α I δ ∇_θ log π(a|s, θ)
        I ← γI
        s ← s′
    end
end
Algorithm 6: Actor-Critic (episodic), adapted from Sutton and Barto (2018)
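A minimal Python sketch in the spirit of Algorithm 6 follows, on a hypothetical three-state chain with a softmax policy and one-hot value features; the environment and step sizes are assumptions for the example.

```python
import numpy as np

# Minimal one-step actor-critic sketch on a 3-state chain:
# action 1 moves right (reward 1 at the right end), action 0 moves left.
rng = np.random.default_rng(0)
n_states = 3
theta = np.zeros((n_states, 2))     # policy logits per state
w = np.zeros(n_states)              # state-value weights
alpha, beta, gamma = 0.1, 0.2, 0.99

def policy(s):
    e = np.exp(theta[s] - theta[s].max())
    return e / e.sum()

for episode in range(2000):
    s, I = 0, 1.0
    while 0 <= s < n_states:
        probs = policy(s)
        a = rng.choice(2, p=probs)
        s_next = s + (1 if a == 1 else -1)
        r = 1.0 if s_next == n_states else 0.0
        v_next = w[s_next] if 0 <= s_next < n_states else 0.0
        delta = r + gamma * v_next - w[s]                        # TD error
        w[s] += beta * delta                                     # critic update
        theta[s] += alpha * I * delta * (np.eye(2)[a] - probs)   # actor update
        I *= gamma
        s = s_next

print([policy(s)[1] for s in range(n_states)])   # probability of moving right grows toward 1
```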
Policy iteration alternates between policy evaluation and policy improvement, to generate a sequence of improving policies. In policy evaluation, the value function of the current policy is estimated from the outcomes of sampled trajectories. In policy improvement, the current value function is used to generate a better policy, e.g., by selecting actions greedily with respect to the value function.
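The following sketch illustrates policy iteration on a hypothetical two-state, two-action MDP; unlike the sampled evaluation described above, it evaluates each policy exactly from the assumed model, to keep the example short.

```python
import numpy as np

# Policy iteration sketch. P[a, s, s'] are transition probabilities and R[a, s] expected rewards
# (made up for illustration).
P = np.array([[[0.9, 0.1], [0.4, 0.6]],
              [[0.1, 0.9], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9
policy = np.zeros(2, dtype=int)            # start with an arbitrary deterministic policy

for _ in range(20):
    # policy evaluation: solve v = R_pi + gamma * P_pi v exactly
    P_pi = P[policy, np.arange(2)]
    R_pi = R[policy, np.arange(2)]
    v = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
    # policy improvement: act greedily with respect to the current value function
    q = R + gamma * P @ v                  # q[a, s]
    new_policy = q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print(policy, np.round(v, 2))
```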
2.3.9 DEEP REINFORCEMENT LEARNING
We obtain deep reinforcement learning (deep RL) methods when we use deep neural networks to approximate any of the following components of reinforcement learning: the value function, v̂(s; θ) or q̂(s, a; θ), the policy π(a|s; θ), and the model (state transition function and reward function). Here, the parameters θ are the weights in deep neural networks. When we use "shallow" models, like linear functions, decision trees, tile coding and so on as the function approximator, we obtain "shallow" RL, and the parameters θ are the weight parameters in these models. Note that a shallow model, e.g., a decision tree, may be non-linear. The distinct difference between deep RL and "shallow" RL is which function approximator is used. This is similar to the difference between deep learning and "shallow" machine learning. We usually utilize stochastic gradient descent to update the weight parameters in deep RL. When off-policy learning, function approximation, in particular non-linear function approximation, and bootstrapping are combined, instability and divergence may occur (Tsitsiklis and Van Roy, 1997). However, recent work like Deep Q-Network (Mnih et al., 2015) and AlphaGo (Silver et al., 2016a) stabilized the learning and achieved outstanding results.
2.3.10 RL PARLANCE
We explain some terms in RL parlance.
The prediction problem, or policy evaluation, is to compute the state or action value function for a policy. The control problem is to find the optimal policy. Planning constructs a value function or a policy with a model.
On-policy methods evaluate or improve the behavioural policy, e.g., SARSA fits the action-value function to the current policy, i.e., SARSA evaluates the policy based on samples from the same policy, then refines the policy greedily with respect to the action values. In off-policy methods, an agent learns an optimal value function/policy, possibly while following an unrelated behavioural policy, e.g., Q-learning attempts to find action values for the optimal policy directly, not necessarily fitting to the policy generating the data, i.e., the policy Q-learning obtains is usually different from the policy that generates the samples. The notions of on-policy and off-policy can be understood as same-policy and different-policy.
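The contrast between the two update targets can be made concrete with a small sketch; the dictionary-based Q table and the helper names are assumptions for illustration.

```python
# Illustrative contrast between the on-policy SARSA target and the off-policy
# Q-learning target; Q is assumed to be a dict mapping (state, action) to a value.

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    # on-policy: the target uses the action a_next actually chosen by the behaviour policy
    target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    # off-policy: the target uses the greedy action, regardless of the behaviour policy
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

Q = {(s, a): 0.0 for s in range(2) for a in range(2)}
sarsa_update(Q, 0, 0, 1.0, 1, 1)
q_learning_update(Q, 0, 1, 1.0, 1, actions=[0, 1])
```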
The exploration-exploitation dilemma is that the agent needs to exploit the currently best action to maximize rewards greedily, yet it has to explore the environment to find better actions, when the policy is not yet optimal or the system is non-stationary.
In model-free methods, the agent learns explicitly from experience, with trial and error; the model (state transition function) is neither known nor learned from experience. RL methods that use models are model-based methods.
In online mode, training algorithms are executed on data acquired in sequence. In offline mode, or batch mode, models are trained on the entire data set.
With bootstrapping, an estimate of state or action value is updated from subsequent estimates.
2.3.11 BRIEF SUMMARY
An RL problem is formulated as an MDP when the observation about the environment satisfies the Markov property. An MDP is defined by the 5-tuple (S, A, P, R, γ). A central concept in RL is the value function. Bellman equations are the cornerstone for developing RL algorithms. Temporal difference learning algorithms are fundamental for evaluating/predicting value functions. Control algorithms find optimal policies. Reinforcement learning algorithms may be based on value function and/or policy, model-free or model-based, on-policy or off-policy, with function approximation or not, with sample backups (TD and Monte Carlo) or full backups (dynamic programming and exhaustive search), and with different depths of backup, either one-step return (TD(0) and dynamic programming) or multi-step return (TD(λ), Monte Carlo, and exhaustive search). When combining
off-policy learning, function approximation, and bootstrapping, we face instability and divergence (Tsitsiklis and Van Roy, 1997), the deadly triad issue (Sutton and Barto, 2018). Theoretical guarantees have been established for linear function approximation, e.g., Gradient-TD (Sutton et al., 2009a;b; Mahmood et al., 2014), Emphatic-TD (Sutton et al., 2016) and Du et al. (2017). With non-linear function approximation, in particular deep learning, algorithms like Deep Q-Network (Mnih et al., 2015) and AlphaGo (Silver et al., 2016a; 2017) stabilized the learning and achieved stunning results, which is the focus of this overview.
3 CORE ELEMENTS
An RL agent executes a sequence of actions and observes states and rewards, with the major components of value function, policy and model. An RL problem may be formulated as a prediction, control or planning problem, and solution methods may be model-free or model-based, with value function and/or policy. Exploration-exploitation is a fundamental tradeoff in RL. Knowledge would be critical for RL. In this section, we discuss core RL elements: value function in Section 3.1, policy in Section 3.2, reward in Section 3.3, model and planning in Section 3.4, exploration in Section 3.5, and knowledge in Section 3.6.
3.1 VALUE FUNCTION
The value function is a fundamental concept in reinforcement learning, and temporal difference (TD) learning (Sutton, 1988) and its extension, Q-learning (Watkins and Dayan, 1992), are classical algorithms for learning state and action value functions respectively. In the following, we focus on Deep Q-Network (Mnih et al., 2015), a recent breakthrough, and its extensions.
3.1.1 DEEP Q-NETWORK (DQN) AND EXTENSIONS
Input: the pixels and the game score
Output: Q action value function (from which we obtain a policy and select actions)
Initialize replay memory D
Initialize action-value function Q with random weights θ
Initialize target action-value function Q̂ with weights θ⁻ = θ
for episode = 1 to M do
    Initialize sequence s_1 = {x_1} and preprocessed sequence φ_1 = φ(s_1)
    for t = 1 to T do
        Following the ε-greedy policy, select a_t = a random action with probability ε, otherwise arg max_a Q(φ(s_t), a; θ)
        Execute action a_t in the emulator and observe reward r_t and image x_{t+1}
        Set s_{t+1} = s_t, a_t, x_{t+1} and preprocess φ_{t+1} = φ(s_{t+1})
        Store transition (φ_t, a_t, r_t, φ_{t+1}) in D
        // experience replay
        Sample a random minibatch of transitions (φ_j, a_j, r_j, φ_{j+1}) from D
        Set y_j = r_j if the episode terminates at step j + 1, otherwise y_j = r_j + γ max_{a′} Q̂(φ_{j+1}, a′; θ⁻)
        Perform a gradient descent step on (y_j − Q(φ_j, a_j; θ))² w.r.t. the network parameters θ
        // periodic update of target network
        Every C steps reset Q̂ = Q, i.e., set θ⁻ = θ
    end
end
Algorithm 7: Deep Q-Network (DQN), adapted from Mnih et al. (2015)
Before DQN, it was well known that RL is unstable or even divergent when the action value function is approximated with a nonlinear function like neural networks. DQN made several important contributions: 1) stabilizing the training of action value function approximation with deep neural networks (CNN) using experience replay (Lin, 1992) and a target network; 2) designing an end-to-end RL approach, with only the pixels and the game score as inputs, so that only minimal domain knowledge is required; 3) training a flexible network with the same algorithm, network architecture and hyper-parameters to perform well on many different tasks, i.e., 49 Atari games (Bellemare et al., 2013), outperforming previous algorithms and performing comparably to a human professional tester.
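The sketch below illustrates the two stabilizing ideas, experience replay and a periodically synchronized target network, with a tiny linear stand-in for the Q-network; it is not the convolutional architecture or training setup of Mnih et al. (2015), and all names and sizes are illustrative.

```python
import random
from collections import deque
import numpy as np

class ReplayMemory:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)
    def push(self, transition):            # (phi, a, r, phi_next, done)
        self.buffer.append(transition)
    def sample(self, batch_size):
        return random.sample(list(self.buffer), batch_size)

def q_values(theta, phi):                  # linear stand-in for Q(phi, .; theta)
    return theta @ phi

def dqn_targets(batch, theta_target, gamma=0.99):
    # y_j = r_j for terminal transitions, else r_j + gamma * max_a' Q(phi_{j+1}, a'; theta^-)
    return np.array([r if done else r + gamma * q_values(theta_target, phi_next).max()
                     for (_, _, r, phi_next, done) in batch])

def gradient_step(theta, batch, targets, lr=1e-3):
    for (phi, a, _, _, _), y in zip(batch, targets):
        td_error = y - q_values(theta, phi)[a]
        theta[a] += lr * td_error * phi     # SGD on (y - Q(phi, a; theta))^2
    return theta

# Tiny usage; every C gradient steps one would reset theta_target = theta.copy().
rng = np.random.default_rng(0)
theta = rng.normal(size=(4, 8)); theta_target = theta.copy()
memory = ReplayMemory()
memory.push((rng.normal(size=8), 1, 0.5, rng.normal(size=8), False))
batch = memory.sample(1)
theta = gradient_step(theta, batch, dqn_targets(batch, theta_target))
```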
See Chapter 16 in Sutton and Barto (2018) for a detailed and intuitive description of Deep Q-Network. See DeepMind's description of DQN at https://deepmind.com/research/dqn/.
DOUBLE DQN
van Hasselt et al. (2016a) proposed Double DQN (D-DQN) to tackle the over-estimation problem in Q-learning. In standard Q-learning, as well as in DQN, the parameters are updated as follows:
$$\theta_{t+1} = \theta_t + \alpha \big(y_t^Q - Q(s_t, a_t; \theta_t)\big) \nabla_{\theta_t} Q(s_t, a_t; \theta_t),$$
where
$$y_t^Q = r_{t+1} + \gamma \max_a Q(s_{t+1}, a; \theta_t),$$
so that the max operator uses the same values to both select and evaluate an action. As a consequence, it is more likely to select over-estimated values, resulting in over-optimistic value estimates. van Hasselt et al. (2016a) proposed to evaluate the greedy policy according to the online network, but to use the target network to estimate its value. This can be achieved with a minor change to the DQN algorithm, replacing $y_t^Q$ with
$$y_t^{D\text{-}DQN} = r_{t+1} + \gamma\, Q\big(s_{t+1}, \arg\max_a Q(s_{t+1}, a; \theta_t); \theta_t^-\big),$$
where $\theta_t$ is the parameter of the online network and $\theta_t^-$ is the parameter of the target network. For reference, $y_t^Q$ can be written as
$$y_t^Q = r_{t+1} + \gamma\, Q\big(s_{t+1}, \arg\max_a Q(s_{t+1}, a; \theta_t); \theta_t\big).$$
D-DQN found better policies than DQN on Atari games.
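The difference between the two targets can be sketched as follows, assuming hypothetical callables q_online and q_target that return vectors of action values under the online parameters θt and the target parameters θt⁻.

```python
import numpy as np

def dqn_target(r, s_next, q_target, gamma=0.99):
    # select and evaluate the action with the same (target) network
    return r + gamma * q_target(s_next).max()

def double_dqn_target(r, s_next, q_online, q_target, gamma=0.99):
    a_star = int(np.argmax(q_online(s_next)))     # select with the online network
    return r + gamma * q_target(s_next)[a_star]   # evaluate with the target network
```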
PRIORITIZED EXPERIENCE REPLAY
In DQN, experience transitions are uniformly sampled from the replay memory, regardless of the significance of experiences. Schaul et al. (2016) proposed to prioritize experience replay, so that important experience transitions can be replayed more frequently, in order to learn more efficiently. The importance of experience transitions is measured by TD errors. The authors designed a stochastic prioritization based on the TD errors, using importance sampling to correct for the bias in the update distribution. The authors used prioritized experience replay in DQN and D-DQN, and improved their performance on Atari games.
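A minimal sketch of proportional prioritization with importance-sampling weights follows; it uses a plain list instead of the sum-tree data structure of Schaul et al. (2016), and the class and parameter names are illustrative.

```python
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities = [], []

    def push(self, transition, td_error):
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size, beta=0.4, rng=np.random.default_rng()):
        p = np.array(self.priorities); p = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-beta)   # importance-sampling correction
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha
```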
DUELING ARCHITECTURE
Wang et al. (2016b) proposed the dueling network architecture to estimate the state value function V(s) and the associated advantage function A(s, a), and then combine them to estimate the action value function Q(s, a), in order to converge faster than Q-learning. In DQN, a CNN layer is followed by a fully connected (FC) layer. In the dueling architecture, a CNN layer is followed by two streams of FC layers, to estimate the value function and the advantage function separately; then the two streams are combined to estimate the action value function. Usually we use the following to combine V(s) and A(s, a) to obtain Q(s, a),
$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \Big(A(s, a; \theta, \alpha) - \max_{a'} A(s, a'; \theta, \alpha)\Big),$$
where α and β are parameters of the two streams of FC layers. Wang et al. (2016b) proposed to replace the max operator with an average, as follows, for better stability,
$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \Big(A(s, a; \theta, \alpha) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a'; \theta, \alpha)\Big).$$
The dueling architecture, implemented with D-DQN and prioritized experience replay, improved over previous work, i.e., DQN and D-DQN with prioritized experience replay, on Atari games.
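The aggregation step can be sketched as follows, with plain arrays standing in for the outputs of the value and advantage streams.

```python
import numpy as np

def dueling_q(value, advantages, use_mean=True):
    # Combine the two streams into action values; the mean-subtracted form is the more
    # stable variant, the max-subtracted form is the first variant above.
    advantages = np.asarray(advantages, dtype=float)
    if use_mean:
        return value + (advantages - advantages.mean())
    return value + (advantages - advantages.max())

print(dueling_q(1.5, [0.2, -0.1, 0.4]))
```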
DISTRIBUTIONAL VALUE FUNCTION
Bellemare et al. (2017) proposed to learn the distribution of returns, rather than only the expected return as in the standard value function.
RAINBOW
Hessel et al. (2018) proposed Rainbow, which combines several DQN extensions, including double Q-learning, prioritized experience replay, the dueling architecture, multi-step returns, distributional value functions, and noisy nets.
MORE DQN EXTENSIONS
DQN has been receiving much attention. We list several extensions/improvements here.
⢠Anschel et al. (2017) proposed to reduce variability and instability by an average of previ- ous Q-values estimates.
⢠He et al. (2017) proposed to accelerate DQN by optimality tightening, a constrained opti- mization approach, to propagate reward faster, and to improve accuracy over DQN.
⢠Liang et al. (2016) attempted to understand the success of DQN and reproduced results with shallow RL.
⢠OâDonoghue et al. (2017) proposed policy gradient and Q-learning (PGQ), as discussed in Section 3.2.3.
⢠Oh et al. (2015) proposed spatio-temporal video prediction conditioned on actions and previous video frames with deep neural networks in Atari games. | 1701.07274#68 | Deep Reinforcement Learning: An Overview | We give an overview of recent exciting achievements of deep reinforcement
• Osband et al. (2016) designed a better exploration strategy to improve DQN.
• Hester et al. (2018) proposed to learn from demonstration with new loss functions, as discussed in Section 4.2.
3.2 POLICY
A policy maps a state to an action, and policy optimization is to find an optimal mapping. As in Peters and Neumann (2015), the spectrum from direct policy search to value-based RL includes: evolutionary strategies, CMA-ES (covariance matrix adaptation evolution strategy), episodic REPS (relative entropy policy search), policy gradients, PILCO (probabilistic inference for learning control) (Deisenroth and Rasmussen, 2011), model-based REPS, policy search by trajectory optimization, actor critic, natural actor critic, eNAC (episodic natural actor critic), advantage weighted regression, conservative policy iteration, LSPI (least squares policy iteration) (Lagoudakis and Parr, 2003), Q-learning, and fitted Q, as well as important extensions, contextual policy search and hierarchical policy search.
We discuss actor-critic (Mnih et al., 2016). Then we discuss policy gradient, including deterministic policy gradient (Silver et al., 2014; Lillicrap et al., 2016), trust region policy optimization (Schulman et al., 2015), and benchmark results (Duan et al., 2016). Next we discuss the combination of policy gradient and off-policy RL (O'Donoghue et al., 2017; Nachum et al., 2017; Gu et al., 2017).
See the Retrace algorithm (Munos et al., 2016), a safe and efficient return-based off-policy control algorithm, and its actor-critic extension, Reactor (Retrace-actor) (Gruslys et al., 2017). See distributed proximal policy optimization (Heess et al., 2017). McAllister and Rasmussen (2017) extended PILCO to POMDPs.
3.2.1 ACTOR-CRITIC
An actor-critic algorithm learns both a policy and a state-value function, and the value function is used for bootstrapping, i.e., updating a state from subsequent estimates, to reduce variance and accelerate learning (Sutton and Barto, 2018). In the following, we focus on asynchronous advantage actor-critic (A3C) (Mnih et al., 2016). Mnih et al. (2016) also discussed asynchronous one-step SARSA, one-step Q-learning and n-step Q-learning.
In A3C, parallel actors employ different exploration policies to stabilize training, so that experience replay is not utilized. Different from most deep learning algorithms, asynchronous methods can run on a single multi-core CPU. For Atari games, A3C ran much faster yet performed better than or comparably with DQN, Gorila (Nair et al., 2015), D-DQN, Dueling D-DQN, and Prioritized D-DQN. A3C also succeeded on continuous motor control problems: TORCS car racing games and MuJoCo physics manipulation and locomotion, as well as Labyrinth, a navigation task in random 3D mazes using visual inputs, in which an agent faces a new maze in each new episode, so that it needs to learn a general strategy to explore random mazes.
Global shared parameter vectors θ and θ_v, thread-specific parameter vectors θ′ and θ′_v
Global shared counter T = 0, T_max
Initialize step counter t ← 1
for T ≤ T_max do
    Reset gradients, dθ ← 0 and dθ_v ← 0
    Synchronize thread-specific parameters θ′ = θ and θ′_v = θ_v
    Set t_start = t, get state s_t
    for s_t not terminal and t − t_start < t_max do
        Take a_t according to policy π(a_t|s_t; θ′)
        Receive reward r_t and new state s_{t+1}
        t ← t + 1, T ← T + 1
    end
    R = 0 for terminal s_t, otherwise R = V(s_t; θ′_v)
    for i ∈ {t − 1, ..., t_start} do
        R ← r_i + γR
        accumulate gradients wrt θ′: dθ ← dθ + ∇_{θ′} log π(a_i|s_i; θ′)(R − V(s_i; θ′_v))
        accumulate gradients wrt θ′_v: dθ_v ← dθ_v + ∇_{θ′_v}(R − V(s_i; θ′_v))²
    end
    Update asynchronously θ using dθ, and θ_v using dθ_v
end
Algorithm 8: A3C, each actor-learner thread, based on Mnih et al. (2016)
We present the pseudo code for asynchronous advantage actor-critic, for each actor-learner thread, in Algorithm 8. A3C maintains a policy $\pi(a_t|s_t; \theta)$ and an estimate of the value function $V(s_t; \theta_v)$, updated with n-step returns in the forward view, after every $t_{max}$ actions or when a terminal state is reached, similar to using minibatches. The gradient update can be seen as $\nabla_{\theta} \log \pi(a_t|s_t; \theta) A(s_t, a_t; \theta, \theta_v)$, where $A(s_t, a_t; \theta, \theta_v) = \sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V(s_{t+k}; \theta_v) - V(s_t; \theta_v)$ is an estimate of the advantage function, with $k$ upper-bounded by $t_{max}$.
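The n-step returns and advantage estimates used in this update can be sketched as follows; the function name and the illustrative inputs are assumptions.

```python
import numpy as np

def nstep_advantages(rewards, values, bootstrap_value, gamma=0.99):
    # values[i] = V(s_i; theta_v); bootstrap_value = 0 for terminal rollouts,
    # otherwise V(s_{t_max}; theta_v).
    R = bootstrap_value
    advantages = np.zeros(len(rewards))
    for i in reversed(range(len(rewards))):
        R = rewards[i] + gamma * R                # n-step return in the forward view
        advantages[i] = R - values[i]             # advantage estimate for the policy gradient
    return advantages

print(nstep_advantages([0.0, 0.0, 1.0], values=[0.1, 0.2, 0.5], bootstrap_value=0.0))
```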
Wang et al. (2017b) proposed a stable and sample efficient actor-critic deep RL model using experience replay, with truncated importance sampling, the stochastic dueling network (Wang et al., 2016b) as discussed in Section 3.1.1, and trust region policy optimization (Schulman et al., 2015) as discussed in Section 3.2.2. Babaeizadeh et al. (2017) proposed a hybrid CPU/GPU implementation of A3C.
3.2.2 POLICY GRADIENT
REINFORCE (Williams, 1992; Sutton et al., 2000) is a popular policy gradient method. Relatively speaking, Q-learning as discussed in Section 3.1 is sample efficient, while policy gradient is stable.
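As a minimal illustration of REINFORCE, the sketch below (a toy example with illustrative names, not the cited works' code) performs Monte Carlo policy-gradient updates for a linear softmax policy over discrete actions.

import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_update(theta, episode, alpha=0.01, gamma=0.99):
    """One REINFORCE update from a finished episode.

    theta   : (num_actions, num_features) parameters of a linear softmax policy
    episode : list of (features, action, reward) tuples
    """
    G = 0.0
    returns = []
    for (_, _, r) in reversed(episode):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    for (x, a, _), G in zip(episode, returns):
        probs = softmax(theta @ x)
        grad_logp = -np.outer(probs, x)   # d log pi / d theta for all actions
        grad_logp[a] += x                 # indicator term for the action actually taken
        theta += alpha * G * grad_logp    # gradient ascent on expected return
    return theta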
DETERMINISTIC POLICY GRADIENT
Policies are usually stochastic. However, Silver et al. (2014) and Lillicrap et al. (2016) proposed the deterministic policy gradient (DPG) for efficient estimation of policy gradients.
Silver et al. (2014) introduced the deterministic policy gradient (DPG) algorithm for RL problems with continuous action spaces. The deterministic policy gradient is the expected gradient of the action-value function, which integrates over the state space; whereas in the stochastic case, the policy gradient integrates over both state and action spaces. Consequently, the deterministic policy gradient can be estimated more efficiently than the stochastic policy gradient. The authors introduced an off-policy actor-critic algorithm to learn a deterministic target policy from an exploratory behaviour policy, and to ensure unbiased policy gradients with compatible function approximation for deterministic policy gradients. Empirical results showed its superiority over stochastic policy gradients, in particular in high dimensional tasks, on several problems: a high-dimensional bandit; standard benchmark RL tasks of mountain car and pendulum and a 2D puddle world with low dimensional action spaces; and controlling an octopus arm with a high-dimensional action space. The experiments were conducted with tile-coding and linear function approximators.
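A rough sketch of the deterministic policy gradient estimate for a linear policy a = θs follows; it assumes the critic's action gradient ∂Q/∂a is available as a callable, and all names are illustrative rather than taken from Silver et al. (2014).

import numpy as np

def dpg_policy_gradient(states, theta, grad_a_Q):
    """Estimate the deterministic policy gradient for a linear policy a = theta @ s.

    states   : list of state feature vectors s
    theta    : (action_dim, state_dim) policy parameters
    grad_a_Q : function (s, a) -> dQ/da, the critic's action gradient
    """
    grad = np.zeros_like(theta)
    for s in states:
        a = theta @ s              # deterministic action
        dq_da = grad_a_Q(s, a)     # (action_dim,)
        # chain rule: contribution is (dQ/da) outer (da/dtheta), with da_i/dtheta_ij = s_j
        grad += np.outer(dq_da, s)
    return grad / len(states)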
Lillicrap et al. (2016) proposed an actor-critic, model-free, deep deterministic policy gradient (DDPG) algorithm in continuous action spaces, by extending DQN (Mnih et al., 2015) and DPG (Silver et al., 2014). With actor-critic as in DPG, DDPG avoids the optimization of the action at every time step to obtain a greedy policy as in Q-learning, which would be infeasible in complex action spaces with large, unconstrained function approximators like deep neural networks. To make learning stable and robust, similar to DQN, DDPG deploys experience replay and an idea similar to the target network, a "soft" target, which, rather than copying the weights directly as in DQN, updates the soft target network weights θ′ slowly to track the learned network weights θ: θ′ ← τθ + (1 − τ)θ′, with τ ≪ 1. The authors adapted batch normalization to handle the issue that different components of the observation have different physical units.
As an off-policy algorithm, DDPG learns an actor policy from experiences generated by an exploration policy, formed by adding noise sampled from a noise process to the actor policy. More than 20 simulated physics tasks of varying difficulty in the MuJoCo environment were solved with the same learning algorithm, network architecture and hyper-parameters, obtaining policies whose performance is competitive with those found by a planning algorithm with full access to the underlying physical model and its derivatives. DDPG can solve problems with 20 times fewer steps of experience than DQN, although it still needs a large number of training episodes to find solutions, as in most model-free RL methods. It is end-to-end, with raw pixels as input. The DDPG paper also contains links to videos for illustration.
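The soft target update and the exploration noise described above can be sketched as follows; this is a simplified NumPy illustration (the noise process here is Ornstein-Uhlenbeck-style, one common choice), not the reference DDPG implementation.

import numpy as np

def soft_update(target_params, learned_params, tau=0.001):
    """Slowly track the learned network: theta_target <- tau*theta + (1 - tau)*theta_target."""
    return [tau * w + (1.0 - tau) * w_t
            for w, w_t in zip(learned_params, target_params)]

def exploration_action(policy_action, noise_state, theta=0.15, sigma=0.2, dt=1.0):
    """Add Ornstein-Uhlenbeck-style noise to the deterministic action for exploration."""
    noise_state = noise_state + theta * (-noise_state) * dt \
        + sigma * np.sqrt(dt) * np.random.randn(*noise_state.shape)
    return policy_action + noise_state, noise_state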
TRUST REGION POLICY OPTIMIZATION
Schulman et al. (2015) introduced an iterative procedure to monotonically improve policies theoretically, guaranteed by optimizing a surrogate objective function. The authors then proposed a practical algorithm, Trust Region Policy Optimization (TRPO), by making several approximations, including: introducing a trust region constraint, defined by the KL divergence between the new policy and the old policy, so that at every point in the state space the KL divergence is bounded; approximating the trust region constraint by the average KL divergence constraint; replacing the expectations and the Q value in the optimization problem by sample estimates, with two variants, where in the single path approach individual trajectories are sampled, and in the vine approach a rollout set is constructed and multiple actions are performed from each state in the rollout set; and solving the constrained optimization problem approximately to update the policy's parameter vector. The authors also unified policy iteration and policy gradient with analysis, and showed that policy iteration, policy gradient, and natural policy gradient (Kakade, 2002) are special cases of TRPO. In the experiments, TRPO methods performed well on simulated robotic tasks of swimming, hopping, and walking, as well as playing Atari games in an end-to-end manner directly from raw images.
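The sketch below illustrates the two quantities at the heart of TRPO for a discrete-action policy: the sample estimate of the surrogate objective and the average KL divergence that defines the trust region. It is a simplified illustration with hypothetical names, not the full constrained update.

import numpy as np

def surrogate_and_kl(old_probs, new_probs, actions, advantages):
    """Sample estimates of the TRPO surrogate objective and average KL constraint.

    old_probs, new_probs : (N, num_actions) action probabilities under old/new policy
    actions              : (N,) indices of the actions actually taken
    advantages           : (N,) advantage estimates under the old policy
    """
    idx = np.arange(len(actions))
    ratio = new_probs[idx, actions] / old_probs[idx, actions]   # importance weights
    surrogate = np.mean(ratio * advantages)                     # to be maximized
    kl = np.mean(np.sum(old_probs * (np.log(old_probs) - np.log(new_probs)), axis=1))
    return surrogate, kl   # accept the update only if kl is within the trust region radius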
Wu et al. (2017) proposed scalable TRPO with Kronecker-factored approximation to the curvature. See also the OpenAI baselines post on Proximal Policy Optimization (PPO): https://blog.openai.com/openai-baselines-ppo/
BENCHMARK RESULTS
Duan et al. (2016) presented a benchmark for continuous control tasks, including classic tasks like cart-pole, tasks with very large state and action spaces such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure, and implemented various algorithms, including batch algorithms: REINFORCE, Truncated Natural Policy Gradient (TNPG), Reward-Weighted Regression (RWR), Relative Entropy Policy Search (REPS), Trust Region Policy Optimization (TRPO), Cross Entropy Method (CEM), Covariance Matrix Adaptation Evolution Strategy (CMA-ES); online algorithms: Deep Deterministic Policy Gradient (DDPG); and recurrent variants of batch algorithms. The open source is available at: https://github.com/rllab/rllab.
Duan et al. (2016) compared various algorithms, and showed that DDPG, TRPO, and Truncated Natural Policy Gradient (TNPG) (Schulman et al., 2015) are effective in training deep neural network policies, yet better algorithms are called for on hierarchical tasks.
Islam et al. (2017)
Tassa et al. (2018)
3.2.3 COMBINING POLICY GRADIENT WITH OFF-POLICY RL
O'Donoghue et al. (2017) proposed to combine policy gradient with off-policy Q-learning (PGQ), to benefit from experience replay. Usually actor-critic methods are on-policy. The authors also showed that action value fitting techniques and actor-critic methods are equivalent, and interpreted regularized policy gradient techniques as advantage function learning algorithms. Empirically, the authors showed that PGQ outperformed DQN and A3C on Atari games.
Nachum et al. (2017) introduced the notion of softmax temporal consistency, to generalize the hardmax Bellman consistency as in off-policy Q-learning, in contrast to the average consistency as in on-policy SARSA and actor-critic. The authors established the correspondence and a mutual compatibility property between softmax consistent action values and the optimal policy maximizing entropy regularized expected discounted reward. The authors proposed Path Consistency Learning, attempting to bridge the gap between value and policy based RL, by exploiting multi-step path-wise consistency on traces from both on and off policies.
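As a rough illustration of the path-wise consistency exploited by Path Consistency Learning, the sketch below computes the residual that such a method drives toward zero along a sub-trajectory; the sign convention and names are assumptions for illustration, not taken verbatim from Nachum et al. (2017).

def path_consistency_residual(values, rewards, logps, tau=0.1, gamma=0.99):
    """Softmax temporal-consistency residual along one sub-trajectory (a sketch of the
    quantity driven to zero; signs follow one common convention).

    values  : [V(s_t), V(s_{t+d})] estimated state values at the path's end points
    rewards : r_t, ..., r_{t+d-1} along the path
    logps   : log pi(a_i | s_i) for the actions taken along the path
    tau     : entropy regularization temperature
    """
    d = len(rewards)
    discounted = sum(gamma ** i * (rewards[i] - tau * logps[i]) for i in range(d))
    return -values[0] + gamma ** d * values[1] + discounted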
Gu et al. (2017) proposed Q-Prop to take advantage of the stability of policy gradients and the sample efficiency of off-policy RL. Schulman et al. (2017) showed the equivalence between entropy-regularized Q-learning and policy gradient.
3.3 REWARD
Rewards provide evaluative feedback for a RL agent to make decisions. Rewards may be sparse, so that learning is challenging; e.g., in computer Go, a reward occurs only at the end of a game. There are unsupervised ways to harness environmental signals, see Section 4.2. A reward function is a mathematical formulation for rewards. Reward shaping modifies the reward function to facilitate learning while maintaining the optimal policy. Reward functions may not be available for some RL problems, which is the focus of this section.
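One classical way to modify the reward function while maintaining the optimal policy is potential-based reward shaping; the one-line sketch below illustrates the idea (the potential values are hypothetical user-supplied quantities).

def shaped_reward(reward, potential_s, potential_s_next, gamma=0.99):
    """Potential-based shaping: r' = r + gamma * Phi(s') - Phi(s).
    Adding this term changes the reward landscape but preserves the optimal policy."""
    return reward + gamma * potential_s_next - potential_s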
In imitation learning, an agent learns to perform a task from expert demonstrations, with samples of trajectories from the expert, without a reinforcement signal, and without additional data from the expert while training; two main approaches for imitation learning are behavioral cloning and inverse reinforcement learning. Behavioral cloning, or apprenticeship learning, or learning from demonstration, is formulated as a supervised learning problem to map state-action pairs from expert trajectories to a policy, without learning the reward function (Ho et al., 2016; Ho and Ermon, 2016). Inverse reinforcement learning (IRL) is the problem of determining a reward function given observations of optimal behaviour (Ng and Russell, 2000). Abbeel and Ng (2004) approached apprenticeship learning via IRL.
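Behavioral cloning reduces to supervised learning on expert state-action pairs; a minimal sketch with a linear softmax policy fit by maximum likelihood follows (illustrative names and a toy model, not from the cited works).

import numpy as np

def behavioral_cloning(states, expert_actions, num_actions, lr=0.1, epochs=100):
    """Fit a linear softmax policy to expert state-action pairs by maximum likelihood.

    states         : (N, num_features) array of state features
    expert_actions : (N,) array of expert action indices
    """
    theta = np.zeros((num_actions, states.shape[1]))
    for _ in range(epochs):
        logits = states @ theta.T
        logits -= logits.max(axis=1, keepdims=True)
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        onehot = np.eye(num_actions)[expert_actions]
        grad = (onehot - probs).T @ states / len(states)   # gradient of mean log-likelihood
        theta += lr * grad
    return theta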
In the following, we discuss learning from demonstration (Hester et al., 2018), and imitation learning with generative adversarial networks (GANs) (Ho and Ermon, 2016; Stadie et al., 2017). We will discuss GANs, a recent unsupervised learning framework, in Section 4.2.3.
Su et al. (2016b) proposed to train the dialogue policy jointly with the reward model. Christiano et al. (2017) proposed to learn the reward function from human preferences over comparisons of trajectory segments. See also Hadfield-Menell et al. (2016); Merel et al. (2017); Wang et al. (2017); van Seijen et al. (2017).
Amin et al. (2017)
LEARNING FROM DEMONSTRATION
Hester et al. (2018) proposed Deep Q-learning from Demonstrations (DQfD) to attempt to accelerate learning by leveraging demonstration data, using a combination of temporal difference (TD), supervised, and regularized losses. In DQfD, the reward signal is not available for demonstration data; however, it is available in Q-learning. The supervised large margin classification loss enables the policy derived from the learned value function to imitate the demonstrator; the TD loss enables the
validity of the value function according to the Bellman equation and its further use for learning with RL; the regularization loss on network weights and biases prevents overfitting on the small demonstration dataset. In the pre-training phase, DQfD trains only on demonstration data, to obtain a policy imitating the demonstrator and a value function for continual RL learning. After that, DQfD self-generates samples, and mixes them with demonstration data according to a certain proportion to obtain training data. The authors showed that, on Atari games, DQfD in general has better initial performance, more average rewards, and learns faster than DQN.
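A sketch of a DQfD-style loss combination follows; the margin value and the λ weights are placeholders for illustration, and the helper names are hypothetical rather than from Hester et al. (2018).

import numpy as np

def margin_loss(q_values, expert_action, margin=0.8):
    """Supervised large-margin loss on a demonstration transition:
    max_a [Q(s,a) + l(a_E, a)] - Q(s, a_E), with l = margin for a != a_E and 0 otherwise."""
    penalties = np.full_like(np.asarray(q_values, dtype=float), margin)
    penalties[expert_action] = 0.0
    return np.max(q_values + penalties) - q_values[expert_action]

def combined_loss(td_loss, n_step_td_loss, supervised_loss, l2_loss,
                  lambda_n=1.0, lambda_e=1.0, lambda_l2=1e-5):
    """Weighted combination of the TD, n-step TD, supervised, and regularization losses."""
    return td_loss + lambda_n * n_step_td_loss + lambda_e * supervised_loss + lambda_l2 * l2_loss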
In AlphaGo (Silver et al., 2016a), to be discussed in Section 5.1.1, the supervised learning policy network is learned from expert moves as learning from demonstration; the results initialize the RL policy network. See also Kim et al. (2014); Pérez-D'Arpino and Shah (2017). See Argall et al. (2009) for a survey of robot learning from demonstration.
Večerík et al. (2017)
GENERATIVE ADVERSARIAL IMITATION LEARNING
With IRL, an agent learns a reward function first, then derives an optimal policy from it. Many IRL algorithms have high time complexity, with a RL problem in the inner loop.
Ho and Ermon (2016) proposed the generative adversarial imitation learning algorithm to learn policies directly from data, bypassing the intermediate IRL step. Generative adversarial training was deployed to fit the discriminator, the distribution of states and actions that defines expert behavior, and the generator, the policy.
Generative adversarial imitation learning finds a policy πθ so that a discriminator DR cannot distinguish states following the expert policy πE from states following the imitator policy πθ, hence forcing DR to take 0.5 in all cases and πθ to be indistinguishable from πE in the equilibrium. Such a game is formulated as:
max_{πθ} min_{DR} −E_{πθ}[log DR(s)] − E_{πE}[log(1 − DR(s))]
The authors represented both πθ and DR as deep neural networks, and found an optimal solution by repeatedly performing gradient updates on each of them. DR can be trained with supervised learning on a data set formed from traces of the current πθ and expert traces. For a fixed DR, an optimal πθ is sought. Hence it is a policy optimization problem, with −log DR(s) as the reward. The authors trained πθ by trust region policy optimization (Schulman et al., 2015).
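The alternating training described above can be sketched as follows: the discriminator is fit as a binary classifier separating imitator states from expert states, and the policy is optimized with −log DR(s) as a surrogate reward. The label convention follows the objective given above; function names are illustrative assumptions.

import numpy as np

def discriminator_training_set(policy_states, expert_states):
    """Build a supervised training set for D_R: label 1 for states visited by the current
    imitator policy and 0 for expert states, so D_R is pushed toward 0.5 when the two
    state distributions become indistinguishable."""
    X = np.concatenate([policy_states, expert_states], axis=0)
    y = np.concatenate([np.ones(len(policy_states)), np.zeros(len(expert_states))])
    return X, y

def policy_surrogate_reward(d_on_policy_states):
    """Surrogate reward for the policy step: -log D_R(s), rewarding the imitator for
    visiting states the discriminator cannot attribute to it."""
    return -np.log(np.clip(d_on_policy_states, 1e-8, 1.0))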
Li et al. (2017)
THIRD PERSON IMITATION LEARNING
Stadie et al. (2017) argued that previous works in imitation learning, like Ho and Ermon (2016) and Finn et al. (2016b), have the limitation of first person demonstrations, and proposed to learn from unsupervised third person demonstrations, mimicking how humans learn by observing other humans achieving goals.
3.4 MODEL AND PLANNING
A model is an agent's representation of the environment, including the transition model and the reward model. Usually we assume the reward model is known. We discuss how to handle unknown reward models in Section 3.3. Model-free RL approaches handle unknown dynamical systems; however, they usually require a large number of samples, which may be costly or prohibitive to obtain for real physical systems. Model-based RL approaches learn value functions and/or policies in a data-efficient way; however, they may suffer from the issue of model identification, so that the estimated models may not be accurate, and the performance is limited by the estimated model. Planning constructs a value function or a policy, usually with a model, so that planning is usually related to model-based RL methods.
Chebotar et al. (2017) attempted to combine the advantages of both model-free and model-based RL approaches. The authors focused on time-varying linear-Gaussian policies, and integrated a model-
based linear quadratic regulator (LQR) algorithm with a model-free path integral policy improvement algorithm. To generalize the method to arbitrary parameterized policies such as deep neural networks, the authors combined the proposed approach with guided policy search (GPS) (Levine et al., 2016a). The proposed approach does not generate synthetic samples with estimated models, to avoid degradation from modelling errors. See recent work on model-based learning, e.g., Gu et al. (2016b); Henaff et al. (2017); Hester and Stone (2017); Oh et al. (2017); Watter et al. (2015).
Tamar et al. (2016) introduced Value Iteration Networks (VIN), a fully differentiable CNN planning module to approximate the value iteration algorithm, to learn to plan, e.g., policies in RL. In contrast to conventional planning, VIN is model-free, where reward and transition probability are part of the neural network to be learned, so that it may avoid issues with system identification. VIN can be trained end-to-end with backpropagation. VIN can generalize in a diverse set of tasks: simple gridworlds, Mars Rover navigation, continuous control, and the WebNav Challenge for Wikipedia link navigation (Nogueira and Cho, 2016). One merit of the Value Iteration Network, as well as the Dueling Network (Wang et al., 2016b), is that they design novel deep neural network architectures for reinforcement learning problems. See a blog about VIN at https://github.com/karpathy/paper-notes/blob/master/vin.md.
Silver et al. (2016b) proposed the predictron to integrate learning and planning into one end-to-end training procedure with raw input, in a Markov reward process, which can be regarded as a Markov decision process without actions. See classical Dyna-Q (Sutton, 1990).
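For reference, a Dyna-Q style planning sweep, which uses a learned model to generate simulated updates of the action values, can be sketched as follows (a toy tabular illustration with hypothetical data structures, not the predictron or VIN).

import random

def dyna_q_planning(Q, model, n_planning_steps, alpha=0.1, gamma=0.95):
    """Dyna-Q style planning: replay simulated transitions from a learned deterministic
    model {(s, a): (r, s_next)} to refresh tabular action values.
    Q is assumed to be a nested mapping with default 0.0 entries, e.g. a defaultdict of
    defaultdict(float)."""
    for _ in range(n_planning_steps):
        (s, a), (r, s_next) = random.choice(list(model.items()))
        best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
        Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])   # simulated Q-learning update
    return Q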
Weber et al. (2017)
Andrychowicz et al. (2017)
3.5 EXPLORATION
A RL agent usually uses exploration to reduce its uncertainty about the reward function and transition probabilities of the environment. In tabular cases, this uncertainty can be quantified as confidence intervals or a posterior over environment parameters, which are related to the state-action visit counts. With count-based exploration, a RL agent uses visit counts to guide its behaviour to reduce uncertainty. However, count-based methods are not directly useful in large domains. Intrinsic motivation suggests exploring what is surprising, typically based on the change in prediction error during learning. Intrinsic motivation methods do not require the Markov property and tabular representation that count-based methods require. Bellemare et al. (2016) proposed the pseudo-count, derived from a density model over the state space, to unify count-based exploration and intrinsic motivation, by introducing information gain, which relates to confidence intervals in count-based exploration and to learning progress in intrinsic motivation. The authors established the pseudo-count's theoretical advantage over previous intrinsic motivation methods, and validated it with Atari games.
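A sketch of the quantities involved follows: a pseudo-count recovered from a density model evaluated on a state before and after it is observed, and a simple count-based intrinsic bonus added to the environment reward. The exact bonus form and constants are illustrative assumptions.

import math

def pseudo_count(rho_before, rho_after):
    """Pseudo-count derived from a density model's probability of a state before and after
    observing it (a sketch of the quantity used to generalize visit counts)."""
    return rho_before * (1.0 - rho_after) / (rho_after - rho_before)

def exploration_bonus(count, beta=0.05):
    """Count-based intrinsic reward, decaying with the (pseudo-)count of the state."""
    return beta / math.sqrt(count + 1e-8)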
Nachum et al. (2017) proposed an under-appreciated reward exploration technique to avoid the previous ineffective, undirected exploration strategies over the reward landscape, as in ε-greedy and entropy regularization, and to promote directed exploration of regions in which the log-probability of an action sequence under the current policy under-estimates the resulting reward. The under-appreciated reward exploration strategy results from importance sampling from the optimal policy, and combines a mode seeking and a mean seeking term to trade off exploration and exploitation. The authors implemented the proposed exploration strategy with minor modifications to REINFORCE, and validated it, for the first time with a RL method, on several algorithmic tasks.
Osband et al. (2016) proposed bootstrapped DQN to combine deep exploration with deep neural networks to achieve efficient learning. Houthooft et al. (2016) proposed variational information maximizing exploration for continuous state and action spaces. Fortunato et al. (2017) proposed NoisyNet for efficient exploration, by adding parametric noise to the weights of deep neural networks. See also Azar et al. (2017); Jiang et al. (2016); Ostrovski et al. (2017).
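The idea behind parametric exploration noise, as in NoisyNet, can be sketched in a few lines: each weight is sampled around a learned mean with a learned per-weight scale, so exploration is driven by the network's own parameters rather than an ε-greedy rule. The function below is an illustrative assumption, not the published implementation.

import numpy as np

def noisy_layer_weights(mu, sigma):
    """Parametric exploration noise in weight space: sample w = mu + sigma * eps with
    fresh Gaussian eps, where mu and sigma are learned arrays of the same shape."""
    eps = np.random.randn(*mu.shape)
    return mu + sigma * eps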
Tang et al. (2017)
Fu et al. (2017)
3.6 KNOWLEDGE
(This section would be an open-ended discussion.)
Knowledge would be critical for further development of RL. Knowledge may be incorporated into RL in various ways: through value, reward, policy, model, exploration strategy, etc. During a personal conversation with Rich Sutton, he mentioned that it is still wide open how to incorporate knowledge into RL.
human intelligence, Lake et al. (2016): developmental start-up software, i.e., intuitive physics and intuitive psychology; learning as rapid model building, i.e., compositionality and causality; learning to learn; thinking fast, i.e., approximate inference in structured models, and model-based and model-free reinforcement learning
consciousness prior, Bengio (2017)
ML with knowledge, Song and Roth (2017)
causality, Pearl (2018), Johansson et al. (2016)
interpretability, Zhang and Zhu (2018) surveyed visual interpretability for deep learning, Dong et al. (2017)
George et al. (2017)
Yang and Mitchell (2017)
4 IMPORTANT MECHANISMS
In this section, we discuss important mechanisms for the development of (deep) reinforcement learning, including attention and memory, unsupervised learning, transfer learning, multi-agent reinforcement learning, hierarchical RL, and learning to learn. We note that we do not discuss in detail some important mechanisms, like Bayesian RL (Ghavamzadeh et al., 2015), POMDPs (Hausknecht and Stone, 2015), and semi-supervised RL (Audiffren et al., 2015; Finn et al., 2017; Zhu and Goldberg, 2009).
4.1 ATTENTION AND MEMORY
Attention is a mechanism to focus on the salient parts. Memory provides data storage over long time horizons, and attention is an approach for memory addressing.
Graves et al. (2016) proposed the differentiable neural computer (DNC), in which a neural network can read from and write to an external memory, so that DNC can solve complex, structured problems, which a neural network without read-write memory cannot solve. DNC minimizes memory allocation interference and enables long-term storage. Similar to a conventional computer, in a DNC, the neural network is the controller and the external memory is the random-access memory; and a DNC represents and manipulates complex data structures with the memory. Differently, a DNC learns such representation and manipulation end-to-end with gradient descent from data in a goal-directed manner. When trained with supervised learning, a DNC can solve synthetic question answering problems, for reasoning and inference in natural language; it can solve the shortest path finding problem between two stops in transportation networks and the relationship inference problem in a family tree. When trained with reinforcement learning, a DNC can solve a moving blocks puzzle with changing goals specified by symbol sequences. DNC outperformed normal neural networks like LSTM or DNC's precursor, the Neural Turing Machine (Graves et al., 2014); with harder problems, an LSTM may simply fail. Although these experiments are relatively small-scale, we expect to see further improvements and applications of DNC. See DeepMind's description of DNC at https://deepmind.com/blog/differentiable-neural-computers/.
Mnih et al. (2014) applied attention to image classification and object detection. Xu et al. (2015) integrated attention into image captioning. We briefly discuss applications of attention in computer vision in Section 5.4. The attention mechanism is also deployed in NLP, e.g., in Bahdanau et al. (2015; 2017), and with external memory, in the differentiable neural computer (Graves et al., 2016) as discussed above. Most works follow a soft attention mechanism (Bahdanau et al., 2015), a weighted addressing scheme over all memory locations. There are endeavours in hard attention (Gulcehre et al., 2016; Liang et al., 2017a; Luo et al., 2016; Xu et al., 2015; Zaremba and Sutskever, 2015), which is the way conventional computers access memory.
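A soft attention read over a memory, i.e., a weighted addressing of all memory locations, can be sketched as follows (dot-product scoring is one common choice; the names are illustrative).

import numpy as np

def soft_attention_read(memory, query):
    """Soft attention as weighted memory addressing: score every slot against the query,
    normalize with a softmax, and return the convex combination of all slots.

    memory : (num_slots, dim) matrix of memory contents
    query  : (dim,) query vector
    """
    scores = memory @ query                       # (num_slots,)
    scores -= scores.max()
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ memory, weights              # read vector and addressing weights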
|
See recent work on attention and/or memory, e.g., Ba et al. (2014; 2016); Chen et al. (2016b); Danihelka et al. (2016); Duan et al. (2017); Eslami et al. (2016); Gregor et al. (2015); Jaderberg et al. (2015); Kaiser and Bengio (2016); Kadlec et al. (2016); Luo et al. (2016); Oh et al. (2016); Oquab et al. (2015); Vaswani et al. (2017); Weston et al. (2015); Sukhbaatar et al. (2015); Yang et al. (2015); Zagoruyko and Komodakis (2017); Zaremba and Sutskever (2015). See http://distill.pub/2016/augmented-rnns/ and http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/ for blogs about attention and memory.
4.2 UNSUPERVISED LEARNING
Unsupervised learning is a way to take advantage of the massive amount of data, and would be a critical mechanism to achieve general artificial intelligence. Unsupervised learning is categorized into non-probabilistic models, like sparse coding, autoencoders, k-means, etc., and probabilistic (generative) models, where density functions are concerned, either explicitly or implicitly (Salakhutdinov, 2016). Among probabilistic (generative) models with explicit density functions, some have tractable models, like fully observable belief nets, neural autoregressive distribution estimators, and PixelRNN, etc.; some have non-tractable models, like Boltzmann machines, variational autoencoders, Helmholtz machines, etc. For probabilistic (generative) models with implicit density functions, we have generative adversarial networks, moment matching networks, etc.
In the following, we discuss Horde (Sutton et al., 2011), and unsupervised auxiliary learning (Jaderberg et al., 2017), two ways to take advantage of possible non-reward training signals in environments. We also discuss generative adversarial networks (Goodfellow et al., 2014). See also Le et al. (2012), Chen et al. (2016), Liu et al. (2017), and Artetxe et al. (2017).
4.2.1 HORDE
Sutton et al. (2011) proposed to represent knowledge with general value functions, where the policy, termination function, reward function, and terminal reward function are parameters. The authors then proposed Horde, a scalable real-time architecture for learning, in parallel, general value functions for independent sub-agents from unsupervised sensorimotor interaction, i.e., non-reward signals and observations. Horde can learn to predict the values of many sensors, and policies to maximize those sensor values, with general value functions, and answer predictive or goal-oriented questions. Horde is off-policy, i.e., it learns in real-time while following some other behaviour policy, and learns with gradient-based temporal difference learning methods, with constant time and memory complexity per time step.
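As a rough illustration of the general value function idea behind Horde, the sketch below maintains several GVF predictions in parallel with tabular off-policy TD(0) and per-step importance sampling on an invented random-walk environment; Horde itself uses gradient-TD methods and function approximation, so this is a simplified stand-in rather than the actual architecture.

```python
import numpy as np

# Each GVF is defined by a cumulant (pseudo-reward), a continuation factor gamma,
# and a target policy; everything here is tabular and invented for illustration.
n_states, n_actions, n_gvfs = 5, 2, 3
rng = np.random.default_rng(0)
cumulants = rng.random((n_gvfs, n_states))                 # pseudo-rewards, one row per GVF
gammas = np.array([0.0, 0.5, 0.9])                         # per-GVF continuation
target_pi = np.full((n_gvfs, n_states, n_actions), 0.5)    # target policies (uniform here)
behaviour_pi = np.full((n_states, n_actions), 0.5)         # single shared behaviour policy
values = np.zeros((n_gvfs, n_states))                      # one prediction per GVF
alpha = 0.1

state = 0
for step in range(10000):
    action = rng.choice(n_actions, p=behaviour_pi[state])
    next_state = rng.integers(n_states)                    # toy random-walk dynamics
    for g in range(n_gvfs):
        rho = target_pi[g, state, action] / behaviour_pi[state, action]  # importance ratio
        td_error = (cumulants[g, next_state]
                    + gammas[g] * values[g, next_state]
                    - values[g, state])
        values[g, state] += alpha * rho * td_error          # off-policy TD(0) update
    state = next_state

print(values.round(2))
```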
4.2.2 UNSUPERVISED AUXILIARY LEARNING
Jaderberg et al. (2017) proposed UNsupervised REinforcement and Auxiliary Learning (UNREAL) to improve learning efficiency by maximizing pseudo-reward functions, besides the usual cumulative reward, while sharing a common representation. UNREAL is composed of an RNN-LSTM base agent, pixel control, reward prediction, and value function replay. The base agent is trained on-policy with A3C (Mnih et al., 2016). Experiences of observations, rewards and actions are stored in a replay buffer, for use by auxiliary tasks. The auxiliary policies use the base CNN and LSTM, together with a deconvolutional network, to maximize changes in pixel intensity of different regions of the input images. The reward prediction module predicts short-term extrinsic reward in the next frame by observing the last three frames, to tackle the issue of reward sparsity. Value function replay further trains the value function. UNREAL improved A3C's performance on Atari games, and performed well on the 3D Labyrinth game. UNREAL has a shared representation among signals, while Horde trains each value function separately with distinct weights. See Deepmind's description of UNREAL at https://deepmind.com/blog/reinforcement-learning-unsupervised-auxiliary-tasks/.
We discuss robotics navigation with similar unsupervised auxiliary learning (Mirowski et al., 2017) in Section 5.2. See also Lample and Chaplot (2017).
4.2.3 GENERATIVE ADVERSARIAL NETWORKS
Goodfellow et al. (2014) proposed generative adversarial nets (GANs) to estimate generative models via an adversarial process by training two models simultaneously: a generative model G to capture the data distribution, and a discriminative model D to estimate the probability that a sample comes from the training data rather than from the generative model G.
Goodfellow et al. (2014) modelled G and D with multilayer perceptrons: G(z; θg) and D(x; θd), where θg and θd are parameters, x are data points, and z are input noise variables. Define a prior on the input noise variable, pz(z). G is a differentiable function and D(x) outputs a scalar as the probability that x comes from the training data rather than pg, the generative distribution we want to learn.
D will be trained to maximize the probability of assigning labels correctly to samples from both the training data and G. Simultaneously, G will be trained to minimize such classification accuracy, log(1 − D(G(z))). As a result, D and G form the two-player minimax game as follows:
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
Goodfellow et al. (2014) showed that as G and D are given enough capacity, generative adversarial nets can recover the data generating distribution, and provided a training algorithm with backpropagation by minibatch stochastic gradient descent.
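As a concrete illustration of this training procedure, here is a minimal sketch (assuming PyTorch) that alternates discriminator and generator updates on a toy 1-D Gaussian dataset; the network sizes, learning rates, and data distribution are arbitrary choices for the example, and the generator uses the common non-saturating loss rather than the literal minimax objective.

```python
import torch
import torch.nn as nn

def sample_data(n):                                   # toy data: 1-D Gaussian, mean 2, std 0.5
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: maximize log D(x) + log(1 - D(G(z))).
    x, z = sample_data(64), torch.randn(64, 1)
    d_loss = bce(D(x), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: non-saturating variant, maximize log D(G(z)).
    z = torch.randn(64, 1)
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())          # should drift toward the data mean of 2.0
```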
See Goodfellow (2017) for Ian Goodfellow's summary of his NIPS 2016 Tutorial on GANs. GANs have received much attention, and many works have appeared since the tutorial.
GANs are notoriously hard to train. See Arjovsky et al. (2017) for Wasserstein GAN (WGAN) as a stable GAN model. Gulrajani et al. (2017) proposed to improve stability of WGAN by penalizing the norm of the gradient of the discriminator with respect to its input, instead of clipping weights as in Arjovsky et al. (2017). Mao et al. (2016) proposed Least Squares GANs (LSGANs), another stable model. Berthelot et al. (2017) proposed BEGAN to improve WGAN by an equilibrium enforcing model, and set a new milestone in visual quality for image generation. Bellemare et al. (2017) proposed Cramér GAN to satisfy three machine learning properties of probability divergences: sum invariance, scale sensitivity, and unbiased sample gradients. Hu et al. (2017) unified GANs and Variational Autoencoders (VAEs).
We discuss imitation learning with GANs in Section 3.3, including generative adversarial imitation learning, and third person imitation learning. Finn et al. (2016a) established a connection between GANs, inverse RL, and energy-based models. Pfau and Vinyals (2016) established the connection between GANs and actor-critic algorithms. See an answer on Quora, http://bit.ly/2sgtpx8, by Prof Sridhar Mahadevan.
4.3 TRANSFER LEARNING
Transfer learning is about transferring knowledge learned from different domains, possibly with different feature spaces and/or different data distributions (Taylor and Stone, 2009; Pan and Yang, 2010; Weiss et al., 2016). As reviewed in Pan and Yang (2010), transfer learning can be inductive, transductive, or unsupervised; inductive transfer learning includes self-taught learning and multi-task learning; and transductive transfer learning includes domain adaptation and sample selection bias/covariate shift.
See also Bousmalis et al. (2017), and https://research.googleblog.com/2017/10/closing-simulation-to-reality-gap-for.html.
Gupta et al. (2017a) formulated the multi-skill problem for two agents to learn multiple skills, defined the common representation using which to map states and to project the execution of skills, and
designed an algorithm for two agents to transfer the informative feature space maximally to transfer new skills, with a similarity loss metric, autoencoder, and reinforcement learning. The authors validated their proposed approach with two simulated robotic manipulation tasks.
See also recent work in transfer learning, e.g., Andreas et al. (2017); Dong et al. (2015); Ganin et al. (2016); Kaiser et al. (2017a); Kansky et al. (2017); Long et al. (2015; 2016); Maurer et al. (2016); Mo et al. (2016); Parisotto et al. (2016); Papernot et al. (2017); Pérez-D'Arpino and Shah (2017); Rajendran et al. (2017); Whye Teh et al. (2017); Yosinski et al. (2014). See Ruder (2017) for an overview about multi-task learning. See the NIPS 2015 Transfer and Multi-Task Learning: Trends and New Perspectives Workshop.
See also Long et al. (2017), Killian et al. (2017), Barreto et al. (2017), and McCann et al. (2017).
4.4 MULTI-AGENT REINFORCEMENT LEARNING
Multi-agent RL (MARL) is the integration of multi-agent systems (Shoham and Leyton-Brown, 2009; Stone and Veloso, 2000) with RL, thus it is at the intersection of game theory (Leyton-Brown and Shoham, 2008) and the RL/AI communities. Besides issues in RL like convergence and the curse of dimensionality, there are new issues like multiple equilibria, and even fundamental issues like what is the question for multi-agent learning, and whether convergence to an equilibrium is an appropriate goal, etc. Consequently, multi-agent learning is challenging both technically and conceptually, and demands clear understanding of the problem to be solved, the criteria for evaluation, and coherent research agendas (Shoham et al., 2007).
Multi-agent systems have many applications, e.g., as we will discuss, games in Section 5.1, robotics in Section 5.2, Smart Grid in Section 5.10, Intelligent Transportation Systems in Section 5.11, and computer systems in Section 5.12.
Busoniu et al. (2008) surveyed works in multi-agent RL. There are several recent works, about new deep MARL algorithms (Foerster et al., 2018; Foerster et al., 2017; Lowe et al., 2017; Omidshafiei et al., 2017), new communication mechanisms in MARL (Foerster et al., 2016; Sukhbaatar et al., 2016), and sequential social dilemmas with MARL (Leibo et al., 2017).
See also Bansal et al. (2017), Al-Shedivat et al. (2017a), Ghavamzadeh et al. (2006), Foerster et al. (2017), Perolat et al. (2017), Lanctot et al. (2017), Hadfield-Menell et al. (2016), Hadfield-Menell et al. (2017), Mhamdi et al. (2017), Lowe et al. (2017), and Hoshen (2017).
4.5 HIERARCHICAL REINFORCEMENT LEARNING
Hierarchical RL is a way to learn, plan, and represent knowledge with spatio-temporal abstraction at multiple levels. Hierarchical RL is an approach to the issues of sparse rewards and/or long horizons (Sutton et al., 1999; Dietterich, 2000; Barto and Mahadevan, 2003).
Vezhnevets et al. (2016) proposed strategic attentive writer (STRAW), a deep recurrent neural network architecture, for learning high-level temporally abstracted macro-actions in an end-to-end manner based on observations from the environment. Macro-actions are sequences of actions commonly occurring. STRAW builds a multi-step action plan, updated periodically based on observing rewards, and learns for how long to commit to the plan by following it without replanning. STRAW learns to discover macro-actions automatically from data, in contrast to the manual approach in previous work. Vezhnevets et al. (2016) validated STRAW on next character prediction in text, 2D maze navigation, and Atari games.
Kulkarni et al. (2016) proposed hierarchical-DQN (h-DQN) by organizing goal-driven intrinsically motivated deep RL modules hierarchically to work at different time-scales. h-DQN integrates a top level action value function and a lower level action value function; the former learns a policy over intrinsic sub-goals, or options (Sutton et al., 1999); the latter learns a policy over raw actions to satisfy given sub-goals. In a hard Atari game, Montezuma's Revenge, h-DQN outperformed previous methods, including DQN and A3C.
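To illustrate the two-level control flow, here is a toy tabular sketch in the spirit of h-DQN on an invented "key-then-door" chain (visit state 0, then state 6): a meta-controller picks sub-goals and is trained on extrinsic reward, while a controller acts on raw actions and is trained on intrinsic reward for reaching the current sub-goal. The environment, state encoding, and tabular Q-learning are stand-ins for the deep networks and replay memories of Kulkarni et al. (2016).

```python
import random
from collections import defaultdict

N, KEY, DOOR, START = 7, 0, 6, 3                 # chain of positions 0..6
GOALS, ACTIONS = [KEY, DOOR], [-1, +1]
q_meta = defaultdict(float)                      # (has_key, pos, goal)      -> value
q_ctrl = defaultdict(float)                      # (pos, goal, action_index) -> value
alpha, gamma, eps = 0.1, 0.95, 0.1

def eps_greedy(q, keys):
    if random.random() < eps:
        return random.randrange(len(keys))
    vals = [q[k] for k in keys]
    return vals.index(max(vals))

for episode in range(3000):
    pos, has_key, done, t = START, False, False, 0
    while not done and t < 100:
        goal = GOALS[eps_greedy(q_meta, [(has_key, pos, g) for g in GOALS])]
        meta_key, ext_return, k = (has_key, pos, goal), 0.0, 0
        while pos != goal and t < 100 and not done:              # controller sub-episode
            a = eps_greedy(q_ctrl, [(pos, goal, i) for i in range(len(ACTIONS))])
            nxt = min(max(pos + ACTIONS[a], 0), N - 1)
            intrinsic = 1.0 if nxt == goal else 0.0              # controller reward
            extrinsic = 1.0 if (nxt == DOOR and has_key) else 0.0
            best_next = max(q_ctrl[(nxt, goal, i)] for i in range(len(ACTIONS)))
            q_ctrl[(pos, goal, a)] += alpha * (intrinsic + gamma * best_next - q_ctrl[(pos, goal, a)])
            ext_return += (gamma ** k) * extrinsic               # accumulated for the meta-controller
            pos, t, k = nxt, t + 1, k + 1
            has_key = has_key or pos == KEY
            done = done or extrinsic > 0
        best_meta = max(q_meta[(has_key, pos, g)] for g in GOALS)
        target = ext_return + (0.0 if done else (gamma ** max(k, 1)) * best_meta)
        q_meta[meta_key] += alpha * (target - q_meta[meta_key])

print(round(max(q_meta[(False, START, g)] for g in GOALS), 2))   # learned value of the start state
```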
Florensa et al. (2017) proposed to pre-train a large span of skills using Stochastic Neural Networks with an information-theoretic regularizer, and then, on top of these skills, to train high-level policies for downstream tasks. Pre-training is based on a proxy reward signal, which is a form of intrinsic motivation to explore the agent's own capabilities; its design requires minimal domain knowledge about the downstream tasks. Their method combined hierarchical methods with intrinsic motivation, and the pre-training is done in an unsupervised way.
Tessler et al. (2017) proposed a hierarchical deep RL network architecture for lifelong learning. Reusable skills, or sub-goals, are learned to transfer knowledge to new tasks. The authors tested their approach on the game of Minecraft.
See also Bacon et al. (2017), Kompella et al. (2017), Machado et al. (2017), Peng et al. (2017a), Schaul et al. (2015), Sharma et al. (2017), Vezhnevets et al. (2017), Yao et al. (2014), and Harutyunyan et al. (2018). See a survey on hierarchical RL (Barto and Mahadevan, 2003).
4.6 LEARNING TO LEARN
Learning to learn, also known as meta-learning, is about learning to adapt rapidly to new tasks. It is related to transfer learning, multi-task learning, representation learning, and one/few/zero-shot learning. We can also see hyper-parameter learning and neural architecture design as learning to learn. It is a core ingredient to achieve strong AI (Lake et al., 2016).
See also hyperparameter tuning, e.g., Jaderberg et al. (2017), and Sutton (1992).
4.6.1 LEARNING TO LEARN/OPTIMIZE
Li and Malik (2017) proposed to automate unconstrained continuous optimization algorithms with guided policy search (Levine et al., 2016a) by representing a particular optimization algorithm as a policy, and the convergence rate as the reward. See also Andrychowicz et al. (2016).
Duan et al. (2016) and Wang et al. (2016) proposed to learn a flexible RNN model to handle a family of RL tasks, to improve sample efficiency, learn new tasks in a few samples, and benefit from prior knowledge.
See also combinatorial optimization, e.g., Vinyals et al. (2015), Bello et al. (2016), Dai et al. (2017); and Xu et al. (2017), Smith et al. (2017), Li and Malik (2017).
4.6.2 ZERO/ONE/FEW-SHOT LEARNING
Lake et al. (2015) proposed a one-shot concept learning model, for handwritten characters in particular, with probabilistic program induction. Koch et al. (2015) proposed siamese neural networks with metric learning for one-shot image recognition. Vinyals et al. (2016) designed matching networks for one-shot classification. Duan et al. (2017) proposed a model for one-shot imitation learning with attention for robotics. Ravi and Larochelle (2017) proposed a meta-learning model for few-shot learning. Johnson et al. (2016) presented zero-shot translation for Google's multilingual neural machine translation system. Kaiser et al. (2017b) designed a large scale memory module for lifelong one-shot learning to remember rare events. Kansky et al. (2017) proposed Schema Networks for zero-shot transfer with a generative causal model of intuitive physics. Snell et al. (2017) proposed prototypical networks for few/zero-shot classification by learning a metric space to compute distances to prototype representations of each class.
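To make the prototype idea concrete, here is a minimal NumPy sketch of the prototypical-network classification rule on pre-computed embeddings; the random "embeddings" stand in for a learned embedding network, so this only illustrates the nearest-prototype step, not the episodic training of Snell et al. (2017).

```python
import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, dim = 5, 3, 16                          # a 5-way, 3-shot episode (toy sizes)

# Stand-ins for embeddings that a learned network f_phi(x) would produce.
support = rng.normal(size=(n_way, k_shot, dim)) + np.arange(n_way)[:, None, None]
query = support[2].mean(axis=0) + 0.1 * rng.normal(size=dim)   # a query close to class 2

prototypes = support.mean(axis=1)                      # class prototype = mean of its support set
dists = ((prototypes - query) ** 2).sum(axis=1)        # squared Euclidean distance to each prototype
logits = -dists                                        # classify via softmax over negative distances
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.round(3), "predicted class:", int(probs.argmax()))
```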
4.6.3 NEURAL ARCHITECTURE DESIGN
Neural network architecture design is a notorious, nontrivial engineering issue. Neural architecture search provides a promising avenue to explore.
Zoph and Le (2017) proposed the neural architecture search to generate neural network architectures with an RNN trained by RL, in particular, REINFORCE, searching from scratch in a variable-length architecture space, to maximize the expected accuracy of the generated architectures on a validation set. In the RL formulation, a controller generates hyperparameters as a sequence of tokens, which are actions chosen from hyperparameter spaces; each gradient update to the policy parameters corresponds to training one generated network to convergence; and accuracy on a validation set is the reward signal. The neural architecture search can generate convolutional layers, with skip connections or branching layers, and recurrent cell architectures. The authors designed a parameter server approach to speed up training. Comparing with state-of-the-art methods, the proposed approach achieved competitive results for an image classification task with the CIFAR-10 dataset, and better results for a language modeling task with Penn Treebank.
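A toy sketch of the REINFORCE loop in this formulation is given below: a tabular softmax controller samples a few architecture "tokens", receives a reward, and updates its logits against a moving-average baseline. The reward function is a made-up stand-in for training a child network and measuring validation accuracy, and the decision space is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
choices = [3, 4, 2]                                    # e.g., filter size, number of filters, skip or not
logits = [np.zeros(c) for c in choices]                # controller parameters (tabular softmax policy)
baseline, lr, beta = 0.0, 0.5, 0.9

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(tokens):
    # Stand-in for "train the sampled child network and measure validation accuracy":
    # here it simply scores how many tokens match an arbitrary 'best' architecture.
    best = [1, 2, 0]
    return sum(t == b for t, b in zip(tokens, best)) / len(best)

for step in range(1000):
    probs = [softmax(l) for l in logits]
    tokens = [int(rng.choice(len(p), p=p)) for p in probs]
    R = reward(tokens)
    baseline = beta * baseline + (1 - beta) * R         # moving-average baseline
    for l, p, t in zip(logits, probs, tokens):
        grad = -p
        grad[t] += 1.0                                   # d log pi(token) / d logits
        l += lr * (R - baseline) * grad                  # REINFORCE update

print([int(softmax(l).argmax()) for l in logits])        # typically converges to [1, 2, 0]
```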
Zoph et al. (2017) proposed to transfer the architectural building block learned with the neural architecture search (Zoph and Le, 2017) on a small dataset to a large dataset for scalable image recognition. Baker et al. (2017) proposed a meta-learning approach, using Q-learning with ε-greedy exploration and experience replay, to generate CNN architectures automatically for a given learning task. Zhong et al. (2017) proposed to construct network blocks to reduce the search space of network design, trained by Q-learning. See also Bello et al. (2017).
There are recent works exploring new neural architectures. Kaiser et al. (2017a) proposed to train a single model, MultiModel, which is composed of convolutional layers, an attention mechanism, and sparsely-gated layers, to learn multiple tasks from various domains, including image classification, image captioning and machine translation. Vaswani et al. (2017) proposed a new architecture for translation that replaces CNN and RNN with attention and positional encoding. Wang et al. (2016b) proposed the dueling network architecture to estimate the state value function and the associated advantage function, and to combine them to estimate the action value function for faster convergence. Tamar et al. (2016) introduced Value Iteration Networks, a fully differentiable CNN planning module to approximate the value iteration algorithm, to learn to plan. Silver et al. (2016b) proposed the predictron to integrate learning and planning into one end-to-end training procedure with raw input in Markov reward processes.
See also Liu et al. (2017).
5 APPLICATIONS
Reinforcement learning has a wide range of applications. We discuss games in Section 5.1 and robotics in Section 5.2, two classical RL application areas. Games will still be important testbeds for RL/AI. Robotics will be critical in the era of AI. Next we discuss natural language processing in Section 5.3, which enjoys wide and deep applications of RL recently. Computer vision follows in Section 5.4, in which there are efforts for the integration of vision and language.
Figure 2: Deep RL Applications (an overview chart of application areas, including games, robotics, NLP, computer vision, business management, finance, healthcare, education, Industry 4.0, smart grid, intelligent transportation systems, and computer systems).
Combinatorial optimization, including neural architecture design in Section ??, is an exciting application of RL. In Section 5.5, we discuss business management, like ads, recommendation, customer management, and marketing. We discuss finance in Section 5.6. Business and finance have natural problems for RL. We discuss healthcare in Section 5.7, which receives much attention recently, esp. after the success of deep learning. We discuss Industry 4.0 in Section 5.9. Many countries have made plans to integrate AI with manufacturing. We discuss smart grid in Section 5.10, intelligent transportation systems in Section 5.11, and computer systems in Section 5.12. There are optimization and control problems in these areas, and many of them are concerned with networking and graphs. These application areas may overlap with each other, e.g., a robot may need skills for many of the application areas. We present deep RL applications briefly in Figure 2.
RL is usually for sequential decision making problems. However, some problems, seemingly non-sequential on the surface, like machine translation and neural network architecture design, have been approached by RL. RL applications abound, and creativity would be the boundary.
Reinforcement learning is widely used in operations research (Powell, 2011), e.g., supply chain, inventory management, resource management, etc.; we do not list it as an application area, since it is implicitly a component in application areas like intelligent transportation systems and Industry 4.0. We do not list smart city, an important application area of AI, as it includes several application areas here: healthcare, intelligent transportation systems, smart grid, etc. We do not discuss some interesting applications, like music generation (Briot et al., 2017; Jaques et al., 2017), and retrosynthesis (Segler et al., 2017). See previous work on lists of RL applications at: http://bit.ly/2pDEs1Q, and http://bit.ly/2rjsmaz. We may only touch the surface of some application areas. It is desirable to do a deeper analysis of all application areas listed in the following, which we leave as future work.
5.1 GAMES
Games provide excellent testbeds for RL/AI algorithms. We discuss Deep Q-Network (DQN) in Section 3.1.1 and its extensions, all of which experimented with Atari games. We discuss Mnih et al. (2016) in Section 3.2.1, Jaderberg et al. (2017) in Section 4.2, and Mirowski et al. (2017) in Section 5.2, and they used Labyrinth as the testbed. See Yannakakis and Togelius (2018) for a book on artificial intelligence and games. We discuss multi-agent RL in Section 4.4, which is at the intersection of game theory and RL/AI.
Backgammon and computer Go are perfect information board games. In Section 5.1.1, we discuss briefly Backgammon, and focus on computer Go, in particular, AlphaGo. Variants of card games, including majiang/mahjong, are imperfect information board games, which we discuss in Section 5.1.2, with a focus on Texas Hold'em Poker. In video games, information may be perfect or imperfect, and game theory may be deployed or not. We discuss video games in Section 5.1.3. We will see more achievements in imperfect information games and video games, and their applications.
5.1.1 PERFECT INFORMATION BOARD GAMES
Board games like Backgammon, Go, chess, checkers and Othello are classical testbeds for RL/AI algorithms. In such games, players reveal perfect information. Tesauro (1994) approached Backgammon by using neural networks to approximate the value function learned with TD learning, and achieved human level performance. We focus on computer Go, in particular, AlphaGo (Silver et al., 2016a; 2017), for its significance.
COMPUTER GO
The challenge of solving computer Go comes from not only the gigantic search space of size 250^150, an astronomical number, but also the hardness of position evaluation (Müller, 2002), which was successfully used in solving many other games, like Backgammon and chess.
AlphaGo (Silver et al., 2016a), a computer Go program, beat the human European Go champion 5 games to 0 in October 2015, becoming the first computer Go program to defeat a human professional Go player without handicaps on a full-sized 19 × 19 board. Soon after that, in March 2016, AlphaGo defeated Lee Sedol, an 18-time world champion Go player, 4 games to 1, making headline news worldwide. This set a landmark in AI. AlphaGo then defeated Ke Jie 3:0 in May 2017. AlphaGo Zero (Silver et al., 2017) further improved on previous versions by learning a superhuman computer Go program without human knowledge.
ALPHAGO: TRAINING PIPELINE AND MCTS
We discuss briefly how AlphaGo works based on Silver et al. (2016a) and Sutton and Barto (2018). See Sutton and Barto (2018) for a detailed and intuitive description of AlphaGo. See Deepmind's description of AlphaGo at goo.gl/lZoQ1d.
AlphaGo was built with techniques of deep convolutional neural networks, supervised learning, reinforcement learning, and Monte Carlo tree search (MCTS) (Browne et al., 2012; Gelly and Silver, 2007; Gelly et al., 2012). AlphaGo is composed of two phases: a neural network training pipeline and MCTS. The training pipeline phase includes training a supervised learning (SL) policy network from expert moves, a fast rollout policy, a RL policy network, and a RL value network.
The SL policy network has convolutional layers, ReLU nonlinearities, and an output softmax layer representing a probability distribution over legal moves. The inputs to the CNN are 19 × 19 × 48 image stacks, where 19 is the dimension of a Go board and 48 is the number of features. State-action pairs are sampled from expert moves to train the network with stochastic gradient ascent to maximize the likelihood of the move selected in a given state. The fast rollout policy uses a linear softmax with small pattern features.
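As a rough illustration of this supervised training step, below is a minimal sketch in Python (PyTorch). The 19 × 19 board, the 48 input feature planes, and training by maximizing the log-likelihood of expert moves follow the description above; the network depth, filter counts, and optimizer interface are illustrative assumptions, not the published architecture.

```python
# Minimal sketch of SL policy-network training: a small CNN over 19x19x48 input
# planes with a softmax over the 361 board points, trained to maximize the
# log-likelihood (i.e., minimize cross-entropy) of expert moves.
# The layer sizes here are assumptions, not AlphaGo's published configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    def __init__(self, in_planes=48, filters=64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_planes, filters, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(filters, filters, kernel_size=3, padding=1)
        self.head = nn.Conv2d(filters, 1, kernel_size=1)   # one logit per board point

    def forward(self, x):                 # x: (batch, 48, 19, 19)
        h = F.relu(self.conv1(x))
        h = F.relu(self.conv2(h))
        return self.head(h).flatten(1)    # (batch, 361) move logits

def sl_update(net, optimizer, states, expert_moves):
    """One stochastic gradient step on the likelihood of expert moves."""
    loss = F.cross_entropy(net(states), expert_moves)   # negative log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```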
The RL policy network improves the SL policy network: it has the same network architecture, uses the weights of the SL policy network as initial weights, and is trained with policy gradient. The reward function is +1 for winning and -1 for losing in the terminal states, and 0 otherwise. Games are played between the current policy network and a random, previous iteration of the policy network, to stabilize the
learning and to avoid overï¬tting. Weights are updated by stochastic gradient ascent to maximize the expected outcome.
The RL value network has the same network architecture as the SL policy network, except that the output is a single scalar predicting the value of a position. The value network is learned in a Monte Carlo policy evaluation approach. To tackle the overfitting problem caused by strongly correlated successive positions in games, data are generated by self-play between the RL policy network and itself until game termination. The weights are trained by regression on state-outcome pairs, using stochastic gradient descent to minimize the mean squared error between the prediction and the corresponding outcome.
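A minimal sketch of this value regression step follows, assuming a `value_net` that maps board tensors to one scalar per example and outcomes z in {-1, +1} from self-play games; both names are placeholders for illustration.

```python
# Sketch of the value-network regression described above: fit v(s) to the game
# outcome z by minimizing the mean squared error on state-outcome pairs.
import torch.nn.functional as F

def value_update(value_net, optimizer, states, outcomes):
    v = value_net(states).squeeze(-1)      # predicted values, shape (batch,)
    loss = F.mse_loss(v, outcomes)         # outcomes z in {-1, +1}
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```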
In the MCTS phase, AlphaGo selects moves by lookahead search. It builds a partial game tree starting from the current state, in the following stages: 1) select a promising node to explore further, 2) expand a leaf node guided by the SL policy network and collected statistics, 3) evaluate a leaf node with a mixture of the RL value network and the rollout policy, 4) back up evaluations to update the action values. A move is then selected.
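As a simplified sketch of the selection stage only, the snippet below picks the edge maximizing Q(s, a) + U(s, a), with U proportional to the prior and shrinking with the visit count; the constant c_puct and the exact form of U are assumptions for illustration, and expansion, evaluation, and backup are omitted.

```python
# Simplified sketch of one MCTS selection step over stored edge statistics.
import math

def select_edge(edges, c_puct=1.0):
    """edges: dict mapping action -> {"P": prior, "N": visit count, "Q": action value}."""
    total_visits = sum(e["N"] for e in edges.values())

    def score(e):
        u = c_puct * e["P"] * math.sqrt(total_visits + 1) / (1 + e["N"])
        return e["Q"] + u

    return max(edges, key=lambda a: score(edges[a]))
```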
# ALPHAGO ZERO
AlphaGo Zero can be understood as approximate policy iteration, incorporating MCTS inside the training loop to perform both policy improvement and policy evaluation. MCTS may be regarded as a policy improvement operator: it outputs move probabilities stronger than the raw probabilities of the neural network. Self-play with search may be regarded as a policy evaluation operator: it uses MCTS to select moves, and game winners as samples of the value function. The policy iteration procedure then updates the neural network's weights to match the move probabilities and value more closely with the improved search probabilities and self-play winner, and conducts self-play with the updated neural network weights in the next iteration to make the search stronger.
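The sketch below is a conceptual rendering of this policy iteration view, not the published system: the MCTS inside `play_self_play_game` plays the role of policy improvement, the self-play games with their winners play the role of policy evaluation, and `train_step` fits the network to the search probabilities and winners. All helper names are assumptions.

```python
# Conceptual skeleton of AlphaGo Zero-style policy iteration via self-play.
def self_play_policy_iteration(net, play_self_play_game, train_step,
                               num_iterations=100, games_per_iteration=25):
    """play_self_play_game(net) -> list of (state, search_probs, winner) tuples,
    where search_probs come from MCTS (policy improvement) and winner is the
    game outcome used as a value sample (policy evaluation).
    train_step(net, data) fits the network outputs (p, v) to (search_probs, winner)."""
    for _ in range(num_iterations):
        data = []
        for _ in range(games_per_iteration):
            data.extend(play_self_play_game(net))
        train_step(net, data)   # a stronger network makes the search stronger next iteration
    return net
```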
The features of AlphaGo Zero (Silver et al., 2017), compared with AlphaGo (Silver et al., 2016a), are: 1) it learns from random play, with self-play reinforcement learning, without human data or supervision; 2) it uses black and white stones from the board as input, without any manual feature engineering; 3) it uses a single neural network to represent both policy and value, rather than separate policy and value networks; and 4) it utilizes the neural network for position evaluation and move sampling for MCTS, and does not perform Monte Carlo rollouts. AlphaGo Zero deploys several recent achievements in neural networks: residual convolutional neural networks (ResNets), batch normalization, and rectifier nonlinearities.
AlphaGo Zero has three main components in its self-play training pipeline, executed in parallel asynchronously: 1) optimize neural network weights from recent self-play data continually; 2) evaluate players continually; 3) use the strongest player to generate new self-play data.
When AlphaGo Zero plays a game against an opponent, MCTS searches from the current state, with the trained neural network weights, to generate move probabilities, and then selects a move.
We present brief, conceptual pseudo code for training in AlphaGo Zero in Algorithm 9, to make it easier to understand. Refer to the original paper (Silver et al., 2017) for details.
Silver et al. (2017)
# DISCUSSIONS
AlphaGo Zero is a reinforcement learning algorithm. It is neither supervised learning nor unsupervised learning. The game score is a reward signal, not a supervision label. Optimizing the loss function l is supervised learning; however, it performs policy evaluation and policy improvement, as one iteration in policy iteration.
AlphaGo Zero is not only a heuristic search algorithm. It is a policy iteration procedure in which heuristic search, in particular MCTS, plays a critical role, but within the scheme of reinforcement learning policy iteration, as illustrated in the pseudo code in Algorithm 9. MCTS can be viewed as a policy improvement operator.
Input: the raw board representation of the position, its history, and the colour to play as 19 × 19 images; game rules; a game scoring function; invariance of game rules under rotation and reflection, and invariance to colour transposition except for komi
Output: policy (move probabilities) p, value v
initialize neural network weights θ0 randomly
// AlphaGo Zero follows a policy iteration procedure
for each iteration i do
// termination conditions:
// 1. both players pass
// 2. the search value drops below a resignation threshold
// 3. the game exceeds a maximum length
initialize s0
for each step t, until termination at step T do
// MCTS can be viewed as a policy improvement operator
// search algorithm: asynchronous policy and value MCTS algorithm (APV-MCTS)
// execute an MCTS search πt = αθi−1(st) with previous neural network fθi−1
// each edge (s, a) in the search tree stores a prior probability P(s, a), a visit count N(s, a), and an action value Q(s, a)
while computational resource remains do
select: each simulation traverses the tree by selecting the edge with maximum upper confidence bound Q(s, a) + U(s, a), where U(s, a) ∝ P(s, a)/(1 + N(s, a))
expand and evaluate: the leaf node is expanded and the associated position s is evaluated by the neural network, (P(s, ·), V(s)) = fθi(s); the vector of P values is stored in the outgoing edges from s
backup: each edge (s, a) traversed in the simulation is updated to increment its visit count N(s, a), and to update its action value to the mean evaluation over these simulations, Q(s, a) = 1/N(s, a) · Σ_{s′ | s,a→s′} V(s′), where s′ | s, a → s′ indicates that a simulation eventually reached s′ after taking move a from position s
end
// self-play with search can be viewed as a policy evaluation operator: select each move with the improved MCTS-based policy, use the game winner as a sample of the value
play: once the search is complete, search probabilities π ∝ N^{1/τ} are returned, where N is the visit count of each move from the root and τ is a parameter controlling temperature; play a move by sampling the search probabilities πt, transition to next state st+1
1701.07274 | 131 | # end
score the game to give a final reward rT ∈ {−1, +1}
for each step t in the last game do
zt ← ±rT, the game winner from the perspective of the current player
store data as (st, πt, zt)
end
sample data (s, π, z) uniformly among all time-steps of the last iteration(s) of self-play
// train neural network weights θi
// optimizing the loss function l performs both policy evaluation, via (z − v)², and policy improvement, via −π^T log p, in a single step
adjust the neural network (p, v) = fθi(s):
to minimize the error between the predicted value v and the self-play winner z, and
to maximize the similarity of neural network move probabilities p to search probabilities π
specifically, adjust the parameters θ by gradient descent on the loss function
l = (z − v)² − π^T log p + c‖θ‖²
which sums over the mean-squared error and cross-entropy losses, respectively
c is a parameter controlling the level of L2 weight regularization to prevent overfitting
evaluate the checkpoint every 1,000 training steps to decide whether to replace the current best player (neural network weights) for generating the next batch of self-play games
# end
Algorithm 9: AlphaGo Zero training pseudo code, based on Silver et al. (2017)
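As an illustration of the loss in the pseudo code above, a minimal sketch of l = (z − v)² − π^T log p + c‖θ‖² in Python (PyTorch) might look as follows; the constant c, the batch shapes, and the assumption that `net` returns move logits and a scalar value per position are illustrative, not the published configuration.

```python
# Sketch of the combined AlphaGo Zero training loss: value MSE + policy
# cross-entropy against search probabilities + L2 regularization.
import torch
import torch.nn.functional as F

def alphago_zero_loss(net, states, search_pis, winners, c=1e-4):
    logits, values = net(states)                    # (batch, 361), (batch, 1) assumed
    value_loss = F.mse_loss(values.squeeze(-1), winners)
    policy_loss = -(search_pis * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    l2 = c * sum((p ** 2).sum() for p in net.parameters())
    return value_loss + policy_loss + l2
```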
AlphaGo attains a superhuman level. It may conï¬rm that professionals have developed effective strategies. However, it does not need to mimic professional plays. Thus it does not need to predict their moves correctly.
The inputs to AlphaGo Zero include the raw board representation of the position, its history, and the colour to play as 19 à 19 images; game rules; a game scoring function; invariance of game rules under rotation and reï¬ection, and invariance to colour transposition except for komi. An additional and critical input is solid research and development experiences.
AlphaGo Zero utilized 64 GPU workers (each maybe with multiple GPUs) and 19 CPU parameter servers (each with multiple CPUs) for training, around 2000 TPUs for data generation, and 4 TPUs for game playing. The computation cost is too formidable for replicating AlphaGo Zero.
AlphaGo requires a huge amount of data for training, so it is still a big data issue. However, the data can be generated by self-play, with a perfect model or precise game rules.
Due to the perfect model or precise game rules for computer Go, AlphaGo algorithms have their limitations. For example, in healthcare, robotics and self-driving problems, it is usually hard to collect a large amount of data, and it is hard or impossible to have a close enough or even perfect model. As such, it is nontrivial to directly apply AlphaGo algorithms to such applications.
On the other hand, AlphaGo algorithms, especially the underlying techniques, namely, deep learning, reinforcement learning, and Monte Carlo tree search, have many applications. Silver et al. (2016a) and Silver et al. (2017) recommended the following applications: general game-playing (in particular, video games), classical planning, partially observed planning, scheduling, constraint satisfaction, robotics, industrial control, and online recommendation systems. The AlphaGo Zero blog mentioned the following structured problems: protein folding, reducing energy consumption, and searching for revolutionary new materials.2
AlphaGo has made tremendous progress, and set a landmark in AI. However, we are still far away from attaining artificial general intelligence (AGI).
It is interesting to see how strong a raw deep neural network in AlphaGo can become, and how soon a very strong computer Go program would be available on a mobile phone.
5.1.2 IMPERFECT INFORMATION BOARD GAMES
Imperfect information games, or game theory in general, have many applications, e.g., security and medical decision support. It is interesting to see more progress of deep RL in such applications, and the full version of Texas Holdâem.
Heinrich and Silver (2016) proposed Neural Fictitious Self-Play (NFSP) to combine fictitious self-play with deep RL to learn approximate Nash equilibria for games of imperfect information in a scalable end-to-end approach without prior domain knowledge. NFSP was evaluated on two-player zero-sum games. In Leduc poker, NFSP approached a Nash equilibrium, while common RL methods diverged. In Limit Texas Hold'em, a real-world scale imperfect-information game, NFSP performed similarly to state-of-the-art, superhuman algorithms which are based on significant domain expertise.
2 Andrej Karpathy posted a blog titled "AlphaGo, in context", after AlphaGo defeated Ke Jie in May 2017. He characterized the properties of computer Go as: fully deterministic, fully observable, discrete action space, accessible perfect simulator, relatively short episode/game, clear and fast evaluation conducive to many trials and errors, and huge datasets of human play games, to illustrate the narrowness of AlphaGo. It is true that computer Go has limitations in the problem setting and thus potential applications, and is far from artificial general intelligence. However, we see the success of AlphaGo as a triumph of AI, in particular of AlphaGo's underlying techniques, i.e., learning from demonstration (as supervised learning), deep learning, reinforcement learning, and Monte Carlo tree search; these techniques are present in many recent achievements in AI. As a whole technique, AlphaGo will probably shed light on classical AI areas, like planning, scheduling, and constraint satisfaction (Silver et al., 2016a), and new areas for AI, like retrosynthesis (Segler et al., 2017). Reportedly, the success of AlphaGo's conquering of a titanic search space inspired quantum physicists to solve the quantum many-body problem (Carleo and Troyer, 2017).
1701.07274 | 137 | 33
# DEEPSTACK
Recently, significant progress has been made in Heads-up No-Limit Hold'em Poker (Moravčík et al., 2017): the DeepStack computer program defeated professional poker players for the first time. DeepStack utilizes the recursive reasoning of counterfactual regret minimization (CFR) to handle information asymmetry, focuses computation on the specific situations arising when making decisions, and uses value functions trained automatically, with little domain knowledge or human expert games, and without abstraction or offline computation of complete strategies as in previous approaches.
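At the core of CFR, which DeepStack builds on, is regret matching: each action is played in proportion to its accumulated positive regret. The sketch below shows only this basic rule at a single decision point, not DeepStack's continual re-solving algorithm, and the example numbers are illustrative.

```python
# Regret matching: play actions in proportion to accumulated positive regret.
import numpy as np

def regret_matching(cumulative_regrets):
    positive = np.maximum(cumulative_regrets, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.full_like(positive, 1.0 / len(positive))   # uniform if no positive regret

# example: accumulated regrets for fold / call / raise (illustrative numbers)
print(regret_matching(np.array([4.0, -2.0, 1.0])))        # -> [0.8, 0.0, 0.2]
```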
1701.07274 | 138 | # 5.1.3 VIDEO GAMES
Video games would be great testbeds for artiï¬cial general intelligence.
Wu and Tian (2017) deployed A3C with a CNN to train an agent in a partially observable 3D environment, Doom, from the recent four raw frames and game variables, to predict the next action and value function, following the curriculum learning (Bengio et al., 2009) approach of starting with simple tasks and gradually transitioning to harder ones. It is nontrivial to apply A3C to such 3D games directly, partly due to sparse and long-term rewards. The authors won the champion in Track 1 of the ViZDoom Competition by a large margin, and plan the following future work: a map from an unknown environment, localization, a global plan to act, and visualization of the reasoning process.
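A minimal sketch of the curriculum idea follows: train on easy task settings first and advance once a success threshold is met. The stage parameters, the threshold, and the `train_and_evaluate` helper are illustrative assumptions, not the authors' setup.

```python
# Sketch of curriculum learning: progress from easy to hard task configurations.
def curriculum_training(agent, train_and_evaluate, stages, success_threshold=0.8):
    """stages: list of task configurations ordered from easiest to hardest;
    train_and_evaluate(agent, stage) runs one training round and returns a success rate."""
    for stage in stages:
        success_rate = 0.0
        while success_rate < success_threshold:
            success_rate = train_and_evaluate(agent, stage)
    return agent

# illustrative stage list for a Doom-like task, easiest first
stages = [{"enemy_speed": 0.2, "map": "small"},
          {"enemy_speed": 0.6, "map": "medium"},
          {"enemy_speed": 1.0, "map": "full"}]
```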
Dosovitskiy and Koltun (2017) approached the problem of sensorimotor control in immersive environments with supervised learning, and won the Full Deathmatch track of the Visual Doom AI Competition. We list it here since it is usually an RL problem, yet it was solved with supervised learning. Lample and Chaplot (2017) also discussed how to tackle Doom.
Peng et al. (2017b) proposed a multiagent actor-critic framework, with a bidirectionally-coordinated network to form coordination among multiple agents in a team, deploying the concepts of dynamic grouping and parameter sharing for better scalability. The authors used StarCraft as the testbed. Without human demonstration or labelled data as supervision, the proposed approach learned strategies for coordination similar to the level of experienced human players, like moving without collision, hit and run, cover attack, and focus fire without overkill. Usunier et al. (2017) and Justesen and Risi (2017) also studied StarCraft.
Oh et al. (2016) and Tessler et al. (2017) studied Minecraft; Chen and Yi (2017) and Firoiu et al. (2017) studied Super Smash Bros; and Kansky et al. (2017) proposed Schema Networks and empirically studied variants of Breakout in Atari games.
See Justesen et al. (2017) for a survey about applying deep (reinforcement) learning to video games. See Ontañón et al. (2013) for a survey about StarCraft. Check the AIIDE and CIG StarCraft AI Competitions, and their history at https://www.cs.mun.ca/~dchurchill/starcraftaicomp/history.shtml. See Lin et al. (2017) for the StarCraft Dataset.
5.2 ROBOTICS
Robotics is a classical area for reinforcement learning. See Kober et al. (2013) for a survey of RL in robotics, Deisenroth et al. (2013) for a survey on policy search for robotics, and Argall et al. (2009) for a survey of robot learning from demonstration. See also the journal Science Robotics. It is interesting to note that, as mentioned in a NIPS 2016 invited talk, Boston Dynamics robots did not use machine learning.
In the following, we discuss guided policy search (Levine et al., 2016a) and learning to navigate (Mirowski et al., 2017). See more recent robotics papers, e.g., Chebotar et al. (2016; 2017); Duan et al. (2017); Finn and Levine (2016); Gu et al. (2016a); Lee et al. (2017); Levine et al. (2016b); Mahler et al. (2017); Pérez-D'Arpino and Shah (2017); Popov et al. (2017); Yahya et al. (2016); Zhu et al. (2017b).
We recommend Pieter Abbeel's NIPS 2017 keynote speech, Deep Learning for Robotics; slides are available at https://www.dropbox.com/s/fdw7q8mx3x4wr0c/
1701.07274 | 142 | 34
5.2.1 GUIDED POLICY SEARCH
Levine et al. (2016a) proposed to train the perception and control systems jointly end-to-end, to map raw image observations directly to torques at the robot's motors. The authors introduced guided policy search (GPS) to train policies represented as CNNs, by transforming policy search into supervised learning to achieve data efficiency, with training data provided by a trajectory-centric RL method operating under unknown dynamics. GPS alternates between trajectory-centric RL and supervised learning, to obtain training data from the policy's own state distribution, to address the issue that supervised learning usually does not achieve good, long-horizon performance. GPS utilizes pre-training to reduce the amount of experience data needed to train visuomotor policies. Good performance was achieved on a range of real-world manipulation tasks requiring localization, visual tracking, and handling complex contact dynamics, and in simulated comparisons with previous policy search methods. As the authors mentioned, "this is the first method that can train deep visuomotor policies for complex, high-dimensional manipulation skills with direct torque control".
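A high-level sketch of this alternation is given below: a trajectory-centric RL step fits local controllers, and the deep visuomotor policy is then trained with supervised learning on samples drawn from them. The helper names are assumptions, and the real method additionally constrains the policy and the controllers to agree.

```python
# High-level sketch of the guided policy search alternation (not the full method).
def guided_policy_search(policy, fit_local_controllers, sample_trajectories,
                         supervised_fit, iterations=10):
    for _ in range(iterations):
        controllers = fit_local_controllers(policy)   # trajectory-centric RL step
        data = sample_trajectories(controllers)       # (observation, action) samples
        supervised_fit(policy, data)                  # supervised learning step
    return policy
```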
5.2.2 LEARN TO NAVIGATE
Mirowski et al. (2017) obtained the navigation ability by solving an RL problem maximizing cumulative reward and jointly considering un/self-supervised tasks to improve data efficiency and task performance. The authors addressed the sparse reward issue by augmenting the loss with two auxiliary tasks: 1) unsupervised reconstruction of a low-dimensional depth map for representation learning, to aid obstacle avoidance and short-term trajectory planning; 2) a self-supervised loop closure classification task within a local trajectory. The authors incorporated a stacked LSTM to use memory at different time scales for dynamic elements in the environments. The proposed agent learns to navigate in complex 3D mazes end-to-end from raw sensory input, and performed similarly to human level, even when start/goal locations changed frequently.
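A minimal sketch of augmenting the RL objective with the two auxiliary losses described above follows; the loss weights and the concrete loss functions used here (MSE for depth, binary cross-entropy for loop closure) are assumptions for illustration, not the published formulation.

```python
# Sketch of combining the RL loss with auxiliary depth-reconstruction and
# loop-closure losses, as in jointly trained navigation agents.
import torch.nn.functional as F

def navigation_loss(rl_loss, depth_pred, depth_target, loop_logit, loop_label,
                    w_depth=0.33, w_loop=0.33):
    depth_loss = F.mse_loss(depth_pred, depth_target)                        # auxiliary task 1
    loop_loss = F.binary_cross_entropy_with_logits(loop_logit, loop_label)   # auxiliary task 2, float labels
    return rl_loss + w_depth * depth_loss + w_loop * loop_loss
```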
In this approach, navigation is a by-product of the goal-directed RL optimization problem, in contrast to conventional approaches such as Simultaneous Localisation and Mapping (SLAM), where explicit position inference and mapping are used for navigation. This approach may have the potential to replace the popular SLAM, which usually requires manual processing.
5.3 NATURAL LANGUAGE PROCESSING
In the following we talk about natural language processing (NLP), dialogue systems in Section 5.3.1, machine translation in Section 5.3.2, and text generation in Section 5.3.3. There are many interesting issues in NLP, and we list some in the following.
• language tree-structure learning, e.g., Socher et al. (2011; 2013); Yogatama et al. (2017)
• semantic parsing, e.g., Liang et al. (2017b)
• question answering, e.g., Celikyilmaz et al. (2017), Shen et al. (2017), Trischler et al. (2016), Xiong et al. (2017a), Wang et al. (2017a), and Choi et al. (2017)
• summarization, e.g., Paulus et al. (2017); Zhang and Lapata (2017)
• sentiment analysis (Liu, 2012; Zhang et al., 2018), e.g., Radford et al. (2017)
• information retrieval (Manning et al., 2008), e.g., Zhang et al. (2016), and Mitra and Craswell (2017)
• information extraction, e.g., Narasimhan et al. (2016)
• automatic query reformulation, e.g., Nogueira and Cho (2017)
• language to executable program, e.g., Guu et al. (2017)
• knowledge graph reasoning, e.g., Xiong et al. (2017c)
• text games, e.g., Wang et al. (2016a), He et al. (2016b), and Narasimhan et al. (2015)
Deep learning has been permeating many subareas of NLP, helping make significant progress. The above is a partial list. It appears that NLP is still a field more about synergy than competition, both between deep learning and non-deep learning algorithms, and between approaches based on no domain knowledge (end-to-end) and those based on linguistics knowledge. Some non-deep learning algorithms are
effective and perform well, e.g., word2vec (Mikolov et al., 2013; Mikolov et al., 2017) and fastText (Joulin et al., 2017), and many works that study syntax and semantics of languages; see a recent example in semantic role labeling (He et al., 2017). Some deep learning approaches to NLP problems incorporate explicitly or implicitly linguistics knowledge, e.g., Socher et al. (2011; 2013); Yogatama et al. (2017). See an article by Christopher D. Manning, titled "Last Words: Computational Linguistics and Deep Learning, A look at the importance of Natural Language Processing", at http://mitp.nautil.us/article/170/last-words-computational-linguistics-and-deep-learning.
Melis et al. (2017)
5.3.1 DIALOGUE SYSTEMS
|
In dialogue systems, conversational agents, or chatbots, humans and computers interact in natural language. We intentionally remove "spoken" before "dialogue systems" to accommodate both spoken and written language user interfaces (UI). Jurafsky and Martin (2017) categorize dialogue systems as task-oriented dialogue agents and chatbots; the former are set up to have short conversations to help complete particular tasks; the latter are set up to mimic human-human interactions with extended conversations, sometimes with entertainment value. As in Deng (2017), there are four categories: social chatbots, infobots (interactive question answering), task completion bots (task-oriented or goal-oriented), and personal assistant bots. We have seen generation one dialogue systems, which were symbolic rule/template based, and generation two, which were data driven with (shallow) learning. We are now experiencing generation three: data driven with deep learning, in which reinforcement learning usually plays an important role. A dialogue system usually includes the following modules: (spoken) language understanding, a dialogue manager (dialogue state tracker and dialogue policy learning), and natural language generation (Young et al., 2013).
In task-oriented systems, there is usually a knowledge base to query. A deep learning approach, as usual, attempts to make the learning of the system parameters end-to-end. See Deng (2017) for more details, and see Deng and Li (2013) for a survey of applying machine learning to speech recognition.
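To make the modular pipeline concrete, here is a minimal sketch of a task-oriented dialogue loop, wiring language understanding, state tracking, a dialogue policy, and language generation around a tiny in-memory knowledge base. The slot schema, rule-based policy, and knowledge base are illustrative assumptions, not components of any of the systems cited in this section.

```python
# A toy task-oriented dialogue pipeline: NLU -> state tracker -> policy -> NLG.
# The slot schema, rules, and knowledge base are illustrative assumptions.

KB = [  # tiny in-memory "knowledge base" of restaurants
    {"name": "Golden Wok", "cuisine": "chinese", "area": "north"},
    {"name": "Trattoria Roma", "cuisine": "italian", "area": "center"},
]
SLOTS = ["cuisine", "area"]

def nlu(utterance):
    """Keyword-spotting language understanding: returns detected slot values."""
    found = {}
    for entry in KB:
        for slot in SLOTS:
            if entry[slot] in utterance.lower():
                found[slot] = entry[slot]
    return found

class StateTracker:
    """Accumulates slot values over turns (the dialogue state)."""
    def __init__(self):
        self.state = {}
    def update(self, slot_values):
        self.state.update(slot_values)
        return self.state

def policy(state):
    """Rule-based dialogue policy: request missing slots, then inform from the KB."""
    for slot in SLOTS:
        if slot not in state:
            return ("request", slot)
    matches = [e for e in KB if all(e[s] == state[s] for s in SLOTS)]
    return ("inform", matches[0]["name"] if matches else None)

def nlg(act):
    """Template-based natural language generation."""
    kind, arg = act
    if kind == "request":
        return f"What {arg} are you looking for?"
    return f"I recommend {arg}." if arg else "Sorry, nothing matches."

tracker = StateTracker()
for user_turn in ["I want chinese food", "somewhere in the north"]:
    state = tracker.update(nlu(user_turn))
    print("USER:", user_turn)
    print("SYSTEM:", nlg(policy(state)))
```

In a deep RL treatment, the rule-based policy above is the component that gets replaced by a learned policy (e.g., a deep Q-network over the dialogue state), while the surrounding modules may stay fixed or be trained jointly.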
Li et al. (2017b) presented an end-to-end task-completion neural dialogue system with parameters learned by supervised and reinforcement learning. The proposed framework includes a user simulator (Li et al., 2016d) and a neural dialogue system. The user simulator consists of user agenda modelling and natural language generation. The neural dialogue system is composed of language understanding and dialogue management (dialogue state tracking and policy learning). The authors deployed RL to train dialogue management end-to-end, representing the dialogue policy as a deep Q-network (Mnih et al., 2015), with the tricks of a target network and a customized experience replay, and using a rule-based agent to warm-start the system with supervised learning. The source code is available at http://github.com/MiuLab/TC-Bot.
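As a rough illustration of this kind of setup, the sketch below trains a DQN-style dialogue policy against a stub user simulator, with the two tricks mentioned above: a target network and an experience replay buffer. The state and action dimensions, network sizes, the random simulator, and all hyperparameters are placeholder assumptions and do not reproduce the system of Li et al. (2017b).

```python
# Minimal DQN-style dialogue-policy trainer against a stub user simulator.
# Dimensions, the simulator, and hyperparameters are placeholder assumptions.
import random
from collections import deque
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 20, 8          # dialogue-state features, dialogue acts

def simulate_turn(state, action):
    """Stub user simulator: returns (next_state, reward, done). Placeholder only."""
    next_state = torch.randn(STATE_DIM)
    done = random.random() < 0.1
    reward = 1.0 if (done and random.random() < 0.5) else -0.05
    return next_state, reward, done

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=5000)            # experience replay buffer
gamma, epsilon = 0.95, 0.1

for episode in range(200):
    state, done = torch.randn(STATE_DIM), False
    while not done:
        # epsilon-greedy action selection over dialogue acts
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            with torch.no_grad():
                action = q_net(state).argmax().item()
        next_state, reward, done = simulate_turn(state, action)
        replay.append((state, action, reward, next_state, done))
        state = next_state

        if len(replay) >= 32:
            batch = random.sample(replay, 32)
            s = torch.stack([b[0] for b in batch])
            a = torch.tensor([b[1] for b in batch]).unsqueeze(1)
            r = torch.tensor([b[2] for b in batch])
            s2 = torch.stack([b[3] for b in batch])
            d = torch.tensor([float(b[4]) for b in batch])
            q = q_net(s).gather(1, a).squeeze(1)
            with torch.no_grad():          # bootstrapped target from the frozen network
                target = r + gamma * (1 - d) * target_net(s2).max(1).values
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    if episode % 20 == 0:                  # periodically sync the target network
        target_net.load_state_dict(q_net.state_dict())
```

In practice, the warm-start mentioned above would replace the random initial behaviour: transitions generated by a rule-based agent are used to pre-train the Q-network with supervised targets before RL takes over.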
Dhingra et al. (2017) proposed KB-InfoBot, a goal-oriented dialogue system for multi-turn information access. KB-InfoBot is trained end-to-end using RL from user feedback with differentiable operations, including those for accessing an external knowledge base (KB). In previous work, e.g., Li et al. (2017b) and Wen et al. (2017), a dialogue system accesses real-world knowledge from the KB by symbolic, SQL-like operations, which are non-differentiable and prevent the dialogue system from being fully end-to-end trainable. KB-InfoBot achieves differentiability by inducing a soft posterior distribution over the KB entries to indicate which ones the user is interested in. The authors designed a modified version of the episodic REINFORCE algorithm to explore and learn both the policy for selecting dialogue acts and the posterior over the KB entries for correct retrievals. The authors deployed imitation learning from a rule-based belief tracker and policy to warm up the system.
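The key trick is the soft, differentiable KB lookup. The sketch below, a toy stand-in rather than the KB-InfoBot implementation, induces a posterior over KB rows from per-slot belief distributions by multiplying, for each row, the belief probabilities of that row's slot values.

```python
# Sketch of a differentiable ("soft") KB lookup: a posterior over KB rows induced
# by per-slot belief distributions. The KB and beliefs are toy assumptions.
import torch

# KB rows encoded as indices into each slot's value vocabulary.
# columns: cuisine in {0: chinese, 1: italian}, area in {0: north, 1: center}
kb = torch.tensor([[0, 0],   # row 0: chinese, north
                   [1, 1],   # row 1: italian, center
                   [0, 1]])  # row 2: chinese, center

# Per-slot beliefs over values (would come from a neural belief tracker; here fixed).
belief_cuisine = torch.tensor([0.8, 0.2])   # mostly "chinese"
belief_area = torch.tensor([0.3, 0.7])      # mostly "center"
beliefs = [belief_cuisine, belief_area]

def soft_posterior(kb, beliefs):
    """p(row) proportional to the product over slots of p(row's value for that slot)."""
    scores = torch.ones(kb.shape[0])
    for j, belief in enumerate(beliefs):
        scores = scores * belief[kb[:, j]]
    return scores / scores.sum()

posterior = soft_posterior(kb, beliefs)
print(posterior)   # row 2 (chinese, center) should get the largest probability
```

Because the posterior is recomputed every turn from the current beliefs, gradients with respect to the belief tracker parameters pass through the lookup instead of being blocked by a hard SQL-style filter, which is what enables end-to-end policy-gradient training.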
Su et al. (2016b) proposed an on-line learning framework to train the dialogue policy jointly with the reward model via active learning with a Gaussian process model, to tackle the issue that it is unreliable and costly to use explicit user feedback as the reward signal. The authors showed empirically that the proposed framework reduced manual data annotations significantly and mitigated noisy user feedback in dialogue policy learning.
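The idea can be illustrated with an uncertainty-gated labeling loop: fit a Gaussian process reward model on dialogue features and query the user for explicit feedback only when the model's predictive uncertainty is high. The sketch below uses scikit-learn's GP regressor as a simplified stand-in for the Gaussian process model with active learning described above; the features, noise levels, and uncertainty threshold are assumptions.

```python
# Sketch of uncertainty-gated reward labeling: fit a GP reward model on dialogue
# features and only ask the user for feedback when the model is uncertain.
# Features, threshold, and the use of GP regression are simplifying assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def dialogue_features(n=1):
    """Placeholder 5-d embedding of a finished dialogue."""
    return rng.normal(size=(n, 5))

def ask_user_for_rating(x):
    """Placeholder for an explicit (noisy, costly) user success rating."""
    return float(x.sum() > 0) + rng.normal(scale=0.1)

X = dialogue_features(5)                              # small seed set of labeled dialogues
y = np.array([ask_user_for_rating(x) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X, y)

queries = 0
for _ in range(50):                                   # on-line stream of dialogues
    x = dialogue_features(1)
    mean, std = gp.predict(x, return_std=True)
    if std[0] > 0.3:                                  # uncertain: query the user for feedback
        label = ask_user_for_rating(x[0])
        X, y = np.vstack([X, x]), np.append(y, label)
        gp.fit(X, y)
        queries += 1
    reward = mean[0]                                  # otherwise use the predicted reward for RL
print(f"asked the user only {queries} of 50 times")
```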
Li et al. (2016c) proposed to use deep RL to generate dialogues to model future reward for better informativity, coherence, and ease of answering, attempting to address the issues in the sequence to sequence models based on Sutskever et al. (2014): the myopia and misalignment of maximizing the probability of generating a response given the previous dialogue turn, and the infinite loop of repetitive responses. The authors designed a reward function to reflect the above desirable properties,
and deployed policy gradient to optimize the long-term reward. It would be interesting to investigate the reward model with the approach in Su et al. (2016b), or with inverse RL and imitation learning as discussed in Section 3.3, although Su et al. (2016b) mentioned that such methods are costly and humans may not act optimally.
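A minimal REINFORCE-style sketch of this kind of training step is shown below: sample a response token by token, score the complete response with a hand-designed reward, and scale the summed log-probability by that reward. The vocabulary, reward terms, and single GRU-cell "decoder" are toy assumptions, not the model or reward of Li et al. (2016c); the toy reward only penalizes a dull response and rewards non-repetitive tokens.

```python
# REINFORCE sketch for response generation: sample tokens, score the whole
# response with a hand-designed reward, and push up its log-probability.
# The vocabulary, reward terms, and single-layer "decoder" are toy assumptions.
import torch
import torch.nn as nn

VOCAB = ["<eos>", "i", "don't", "know", "where", "are", "you", "from"]
DULL = {"i don't know"}                      # responses to discourage
EMB, HID = 16, 32

embed = nn.Embedding(len(VOCAB), EMB)
rnn = nn.GRUCell(EMB, HID)
out = nn.Linear(HID, len(VOCAB))
optimizer = torch.optim.Adam(list(embed.parameters()) + list(rnn.parameters())
                             + list(out.parameters()), lr=1e-3)

def reward(tokens):
    """Toy reward: discourage dull responses and reward longer, non-repetitive ones."""
    text = " ".join(tokens)
    r = 0.1 * len(set(tokens))
    if text in DULL:
        r -= 1.0
    return r

for step in range(100):
    h = torch.zeros(1, HID)
    prev = torch.tensor([0])                 # start from <eos>
    log_probs, tokens = [], []
    for _ in range(6):                       # sample a short response
        h = rnn(embed(prev), h)
        dist = torch.distributions.Categorical(logits=out(h))
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        word = VOCAB[tok.item()]
        if word == "<eos>":
            break
        tokens.append(word)
        prev = tok
    R = reward(tokens)
    loss = -R * torch.stack(log_probs).sum() # REINFORCE: scale log-prob by return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a full system the sampled response would be scored by reward terms for informativity, coherence, and ease of answering, and a baseline would usually be subtracted from the return to reduce the variance of the gradient estimate.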