| field | type | min | max |
|---|---|---|---|
| doi | stringlengths | 10 | 10 |
| chunk-id | int64 | 0 | 936 |
| chunk | stringlengths | 401 | 2.02k |
| id | stringlengths | 12 | 14 |
| title | stringlengths | 8 | 162 |
| summary | stringlengths | 228 | 1.92k |
| source | stringlengths | 31 | 31 |
| authors | stringlengths | 7 | 6.97k |
| categories | stringlengths | 5 | 107 |
| comment | stringlengths | 4 | 398 |
| journal_ref | stringlengths | 8 | 194 |
| primary_category | stringlengths | 5 | 17 |
| published | stringlengths | 8 | 8 |
| updated | stringlengths | 8 | 8 |
| references | list | | |
2306.16803
85
Finally, Temporal Difference (TD) learning [80] has a rich history of leveraging proximity in time as a proxy for credit assignment [2]. TD(λ) [80] considers eligibility traces to trade off bias and variance, crediting past state-action pairs in the trajectory for the current reward proportional to how close they are in time. van Hasselt et al. [81] estimate expected eligibility traces, taking into account that the same rewarding state can be reached from various previous states. Hence, not only the past state-actions on the trajectory are credited and updated, but also counterfactual ones that lead to the same rewarding state. Extending the insights from COCOA towards temporal difference methods is an exciting direction for future research. # B Undiscounted infinite-horizon MDPs In this work, we consider an undiscounted MDP with a finite state space S, bounded rewards and an infinite horizon. To ensure that the expected return and value functions remain finite, we require some standard regularity conditions [2]. We assume the MDP contains an absorbing state s∞ that transitions only to itself and has zero reward. Moreover, we assume proper transition dynamics, meaning that an agent following any policy will eventually end up in the absorbing state s∞ with probability one as time goes to infinity.
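The eligibility-trace mechanism mentioned above can be made concrete with a minimal sketch. The tabular TD(λ) implementation below, with accumulating traces on a hypothetical random-walk chain, is our own illustration; the environment, step size, and episode count are arbitrary assumptions, not part of the paper.

```python
import numpy as np

def td_lambda_chain(n_states=7, lam=0.9, alpha=0.1, gamma=1.0, episodes=200, seed=0):
    """Tabular TD(lambda) with accumulating eligibility traces on a random-walk
    chain: states 0..n_states-1 with both ends terminal, reward +1 only when
    the walk exits on the right."""
    rng = np.random.default_rng(seed)
    V = np.zeros(n_states)
    for _ in range(episodes):
        e = np.zeros(n_states)              # eligibility traces
        s = n_states // 2                   # start in the middle
        while 0 < s < n_states - 1:
            s_next = s + (1 if rng.random() < 0.5 else -1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            v_next = 0.0 if s_next in (0, n_states - 1) else V[s_next]
            delta = r + gamma * v_next - V[s]    # TD error
            e[s] += 1.0                          # mark the current state as eligible
            V += alpha * delta * e               # credit all recently visited states
            e *= gamma * lam                     # traces decay with temporal distance
            s = s_next
    return V

print(np.round(td_lambda_chain(), 2))
```

The trace decay γλ is exactly the proximity-in-time heuristic discussed above: states visited further in the past receive exponentially less credit for the current TD error.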
2306.16803#85
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
86
The discounted infinite-horizon MDP formulation is a special case of this setting, with a specific class of transition dynamics. An explicit discounting of future rewards $\sum_{t \geq 0} \gamma^t R_t$, with discount factor $\gamma \in [0, 1]$, is equivalent to considering the above undiscounted MDP setting, but modifying the state transition probability function $p(S_{t+1} \mid S_t, A_t)$ such that each state-action pair has a fixed probability $(1 - \gamma)$ of transitioning to the absorbing state [2]. Hence, all the results considered in this work can be readily applied to the discounted MDP setting, by modifying the environment transitions as outlined above. We can also explicitly incorporate temporal discounting in the policy gradients and contribution coefficients, which we outline in App. J. The undiscounted infinite-horizon MDP with an absorbing state can also model episodic RL with a fixed time horizon. We can include time as an extra feature in the states S, and then have a probability of 1 to transition to the absorbing state when the agent reaches the final time step.
Figure 6: (a) Graphical model of the MDP. (b) Graphical model of the MDP, where we abstracted time.
# C Theorems, proofs and additional information for Section 3
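As a quick sanity check of this equivalence, the sketch below (our own illustration, with an arbitrary random MDP and a fixed policy; none of the quantities come from the paper) compares the exact discounted value of a state with a Monte Carlo estimate of the undiscounted return in the augmented MDP that terminates each step with probability 1 − γ:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 4, 2, 0.9

P = rng.dirichlet(np.ones(nS), size=(nS, nA))    # P[s, a, s']
R = rng.random((nS, nA))                          # expected reward r(s, a)
pi = rng.dirichlet(np.ones(nA), size=nS)          # fixed policy pi[s, a]

# Exact discounted value under pi: V = (I - gamma * P_pi)^{-1} r_pi.
P_pi = np.einsum('sa,sat->st', pi, P)
r_pi = np.einsum('sa,sa->s', pi, R)
V_disc = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)

# Monte Carlo estimate of the *undiscounted* return in the augmented MDP, where
# every step moves to the zero-reward absorbing state with probability 1 - gamma.
def undiscounted_return(s, n_episodes=50_000):
    total = 0.0
    for _ in range(n_episodes):
        state, ret = s, 0.0
        while True:
            a = rng.choice(nA, p=pi[state])
            ret += R[state, a]
            if rng.random() > gamma:              # absorbed: episode ends
                break
            state = rng.choice(nS, p=P[state, a])
        total += ret
    return total / n_episodes

print(V_disc[0], undiscounted_return(0))          # approximately equal
```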
2306.16803#86
2306.16803
88
Abstracting time. In an undiscounted environment, it does not matter at which point in time the agent achieves the rewarding outcome $u'$. Hence, the contribution coefficients (3) sum over all future time steps, to reflect that the rewarding outcome $u'$ can be encountered at any future time, and to incorporate the possibility of encountering $u'$ multiple times. Note that when using temporal discounting, we can adjust the contribution coefficients accordingly (c.f. App. J).
Hindsight distribution. To obtain the hindsight distribution $p(A_t = a \mid S_t = s, U' = u')$, we convert the classical graphical model of an MDP (c.f. Fig. 6a) into a graphical model that incorporates the time $k$ into a separate node (c.f. Fig. 6b). Here, we rewrite $p(U_k = u' \mid S = s, A = a)$, the probability distribution of a rewarding outcome $U_k$, $k$ time steps later, as $p(U' = u' \mid S = s, A = a, K = k)$. By giving $K$ the geometric distribution $p(K = k) = (1 - \beta)\beta^{k-1}$ for some $\beta$, we can rewrite the infinite sums used in the time-independent contribution coefficients as a marginalization over $K$:
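To make the hindsight distribution tangible, the sketch below estimates $p^\pi(A_t = a \mid S_t = s, U' = u')$ by simple counting over rollouts of a small random MDP: every later occurrence of $u'$ tallies the first action taken at $s$. Here we take the rewarding-outcome encoding to be the visited state itself, and the environment, policy, and sample sizes are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA = 5, 2
ABSORB = nS - 1                                    # zero-reward absorbing state

# Random proper MDP: every state-action pair has some chance of absorption.
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
P[:, :, ABSORB] += 0.2
P /= P.sum(-1, keepdims=True)
P[ABSORB] = 0.0
P[ABSORB, :, ABSORB] = 1.0
pi = rng.dirichlet(np.ones(nA), size=nS)

def rollout(s):
    """Return the first action and the visited states U_k = S_k for k >= 1."""
    a0 = rng.choice(nA, p=pi[s])
    a, visited = a0, []
    while s != ABSORB:
        s = rng.choice(nS, p=P[s, a])
        visited.append(s)
        a = rng.choice(nA, p=pi[s])
    return a0, visited

# Count: each later occurrence of u' contributes one tally for the first action.
s0, u_prime = 0, 2
counts = np.zeros(nA)
for _ in range(50_000):
    a0, visited = rollout(s0)
    counts[a0] += sum(v == u_prime for v in visited)

p_hindsight = counts / counts.sum()
w = p_hindsight / pi[s0] - 1.0    # contribution coefficients (Eq. 3 of the main text)
print(p_hindsight, w)
```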
2306.16803#88
2306.16803
89
$$\sum_{k \geq 1} p(U_k = u' \mid S = s, A = a) = \lim_{\beta \to 1} \sum_{k \geq 1} \beta^{k-1}\, p(U_k = u' \mid S = s, A = a) \qquad (5)$$
$$= \lim_{\beta \to 1} \frac{1}{1 - \beta} \sum_{k \geq 1} p(U' = u' \mid S = s, A = a, K = k)\, p(K = k)$$
$$= \lim_{\beta \to 1} \frac{1}{1 - \beta}\, p(U' = u' \mid S = s, A = a) \qquad (7)$$
Note that this limit is finite, as for $k \to \infty$, the probability of reaching an absorbing state $s_\infty$ and corresponding rewarding outcome $u_\infty$ goes to 1 (and we take $u' \neq u_\infty$). Via Bayes rule, we then have that
$$w(s, a, u') = \frac{\sum_{k \geq 1} p(U_k = u' \mid S = s, A = a)}{\sum_{k \geq 1} p(U_k = u' \mid S = s)} - 1 \qquad (8)$$
$$= \frac{p(U' = u' \mid S = s, A = a)}{p(U' = u' \mid S = s)} - 1 = \frac{p(A = a \mid S = s, U' = u')}{\pi(a \mid s)} - 1 \qquad (9)$$
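In a small tabular MDP, the time-summed occupancies in Eq. (8) can be computed exactly by solving a linear system, which is also how ground-truth coefficients can be obtained by dynamic programming. The sketch below is our own construction (a random MDP, with states used as rewarding outcomes); it illustrates the formula and is not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(2)
nS, nA = 5, 2
ABSORB = nS - 1

# Proper tabular MDP: the last state is absorbing and reachable from everywhere.
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
P[:, :, ABSORB] += 0.3
P /= P.sum(-1, keepdims=True)
P[ABSORB], P[ABSORB, :, ABSORB] = 0.0, 1.0
pi = rng.dirichlet(np.ones(nA), size=nS)

# Take U' = S' (states as rewarding outcomes) and restrict to transient states.
T = np.arange(nS - 1)
Q = np.einsum('sa,sat->st', pi, P)[np.ix_(T, T)]
N = np.linalg.inv(np.eye(len(T)) - Q)               # sum_{k>=0} Q^k

# occ_sa[s,a,u'] = sum_{k>=1} p(U_k = u' | s, a);  occ_s[s,u'] = sum_{k>=1} p(U_k = u' | s)
occ_sa = np.einsum('sat,tu->sau', P[np.ix_(T, np.arange(nA), T)], N)
occ_s = np.einsum('sa,sau->su', pi[T], occ_sa)

# Contribution coefficients, Eq. (8): ratio of the two sums minus one.
w = occ_sa / occ_s[:, None, :] - 1.0
print(np.round(w[0], 3))                             # coefficients for state 0, all (a, u')
```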
2306.16803#89
2306.16803
90
where we assume $\pi(a \mid s) > 0$ and where we dropped the limit of $\beta \to 1$ in the notation, as we will always take this limit henceforth. Note that when $\pi(a \mid s) = 0$, the hindsight distribution $p^\pi(a \mid s, u')$ is also equal to zero, and hence the right-hand side of the above equation is undefined. However, the middle term using $p^\pi(U' = u' \mid S = s, A = a)$ is still well-defined, and hence we can use the contribution coefficients even for actions where $\pi(a \mid s) = 0$.
# C.2 Proof Theorem 1
We start with a lemma showing that the contribution coefficients can be used to estimate the advantage $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$, by generalizing Theorem 1 of Harutyunyan et al. [1] towards rewarding outcomes.
Definition 3 (Fully predictive, repeated from main text). A rewarding outcome encoding $U$ is fully predictive of the reward $R$, if the following conditional independence condition holds:
$$p^\pi(R_k = r \mid S_0 = s, A_0 = a, U_k = u) = p^\pi(R = r \mid U = u), \qquad (6)$$
where the right-hand side does not depend on the time $k$.
2306.16803#90
2306.16803
91
Lemma 6. For each state-action pair $(s, a)$ with $\pi(a \mid s) > 0$, and assuming that $u = f(s, a, r)$ is fully predictive of the reward (c.f. Definition 3), we have that
$$A^\pi(s, a) = r(s, a) - \sum_{a' \in \mathcal{A}} \pi(a' \mid s)\, r(s, a') + \mathbb{E}_{T \sim \mathcal{T}(s, \pi)}\Big[\sum_{k \geq 1} w(s, a, U_k)\, R_k\Big] \qquad (10)$$
with the advantage function $A^\pi$, and the reward function $r(s, a) \triangleq \mathbb{E}[R \mid s, a]$.
Proof. We rewrite the undiscounted state-action value function in the limit of a discounting factor $\beta \to 1^-$ (the minus sign indicating that we approach 1 from the left):
$$Q^\pi(s, a) = \lim_{\beta \to 1^-} \mathbb{E}_{T \sim \mathcal{T}(s, a, \pi)}\Big[\sum_{k \geq 0} \beta^k R_k\Big] \qquad (11)$$
$$= r(s, a) + \lim_{\beta \to 1^-} \sum_{r \in \mathcal{R}} \sum_{k \geq 1} \beta^k p^\pi(R_k = r \mid s, a)\, r \qquad (12)$$
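A hedged sketch of how Eq. (10) could be used in practice: given a table of contribution coefficients (e.g. from a learned hindsight model or the linear-system computation sketched earlier) and rollouts that follow π from state s, the advantage is the immediate-reward difference plus the coefficient-weighted sum of observed future rewards. All names, shapes, and toy numbers below are our own assumptions.

```python
import numpy as np

def advantage_estimate(s, a, w, rollouts, r_sa, pi):
    """Monte Carlo estimate in the spirit of Eq. (10).

    w[s, a, u] : contribution coefficients (assumed given)
    rollouts   : list of trajectories [(u_1, r_1), (u_2, r_2), ...] sampled by
                 following pi from state s (first action also drawn from pi)
    r_sa[s, a] : expected immediate reward r(s, a)
    pi[s, a]   : policy probabilities
    """
    baseline = np.dot(pi[s], r_sa[s])                  # sum_a' pi(a'|s) r(s, a')
    future = np.mean([sum(w[s, a, u] * r for u, r in traj) for traj in rollouts])
    return r_sa[s, a] - baseline + future

# Tiny synthetic usage (all numbers arbitrary):
w = np.zeros((3, 2, 3)); w[0, 1, 2] = 0.5              # action 1 in state 0 helps reach u = 2
r_sa = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
pi = np.full((3, 2), 0.5)
rollouts = [[(1, 0.0), (2, 1.0)], [(2, 1.0)]]           # (U_k, R_k) pairs after leaving s = 0
print(advantage_estimate(0, 1, w, rollouts, r_sa, pi))  # 0.5
```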
2306.16803#91
2306.16803
92
$$= r(s, a) + \lim_{\beta \to 1^-} \sum_{r \in \mathcal{R}} \sum_{u \in \mathcal{U}} \sum_{k \geq 1} \beta^k p^\pi(R_k = r, U_k = u \mid s, a)\, r \qquad (13)$$
$$= r(s, a) + \lim_{\beta \to 1^-} \sum_{r \in \mathcal{R}} \sum_{u \in \mathcal{U}} \sum_{k \geq 1} \beta^k p^\pi(R = r \mid U = u)\, r\, p^\pi(U_k = u \mid s, a) \qquad (14)$$
$$= r(s, a) + \lim_{\beta \to 1^-} \sum_{u \in \mathcal{U}} r(u) \sum_{k \geq 1} \beta^k p^\pi(U_k = u \mid s, a) \qquad (15)$$
$$= r(s, a) + \lim_{\beta \to 1^-} \sum_{u \in \mathcal{U}} r(u) \left(\sum_{k \geq 1} \beta^k p^\pi(U_k = u \mid s)\right) \frac{\sum_{k \geq 1} \beta^k p^\pi(U_k = u \mid s, a)}{\sum_{k \geq 1} \beta^k p^\pi(U_k = u \mid s)} \qquad (16)$$
2306.16803#92
2306.16803
93
where we use the property that $u$ is fully predictive of the reward (c.f. Definition 2), and where we define $r(s, a) \triangleq \mathbb{E}[R \mid s, a]$ and $r(u) \triangleq \mathbb{E}[R \mid u]$. Using the graphical model of Fig. 6b where we abstract time $k$ (c.f. App. C.1), we have that $\frac{1}{1 - \beta}\, p^\pi_\beta(U' = u \mid s, a) = \sum_{k \geq 1} \beta^{k-1} p^\pi(U_k = u \mid s, a)$. Leveraging Bayes rule, we get
$$p^\pi_\beta(a \mid s, U' = u) = \frac{p^\pi_\beta(U' = u \mid s, a)\, \pi(a \mid s)}{p^\pi_\beta(U' = u \mid s)} \qquad (17)$$
where we dropped the limit of $\beta \to 1^-$ in the notation of $p^\pi(a \mid s, U' = u)$, as we henceforth always consider this limit. Taking everything together, we have that
$$Q^\pi(s, a) = r(s, a) + \sum_{u \in \mathcal{U}} \sum_{k \geq 1} p^\pi(U_k = u \mid s)\, \frac{p^\pi(a \mid s, U' = u)}{\pi(a \mid s)}\, r(u) \qquad (18)$$
$$= r(s, a) + \sum_{u \in \mathcal{U}} \sum_{k \geq 1} p^\pi(U_k = u \mid s)\, (w(s, a, u) + 1)\, r(u) \qquad (19)$$
2306.16803#93
2306.16803
94
$$= r(s, a) + \sum_{u \in \mathcal{U}} \sum_{k \geq 1} p^\pi(U_k = u \mid s)\, (w(s, a, u) + 1) \sum_{r \in \mathcal{R}} p(R_k = r \mid U_k = u)\, r \qquad (20)$$
$$= r(s, a) + \mathbb{E}_{T \sim \mathcal{T}(s, \pi)}\Big[\sum_{k \geq 1} (w(s, a, U_k) + 1)\, R_k\Big] \qquad (21)$$
Subtracting the value function, we get
$$A^\pi(s, a) = r(s, a) - \sum_{a' \in \mathcal{A}} \pi(a' \mid s)\, r(s, a') + \mathbb{E}_{T \sim \mathcal{T}(s, \pi)}\Big[\sum_{k \geq 1} w(s, a, U_k)\, R_k\Big] \qquad (22)$$
Now we are ready to prove Theorem 1.
Proof. Using the policy gradient theorem [26], we have
$$\nabla_\theta V^\pi(s_0) = \mathbb{E}_{T \sim \mathcal{T}(s_0, \pi)}\Big[\sum_{t \geq 0} \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t)\, A^\pi(S_t, a)\Big] \qquad (23)$$
2306.16803#94
2306.16803
95
$$= \mathbb{E}_{T \sim \mathcal{T}(s_0, \pi)}\Big[\sum_{t \geq 0} \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t)\Big(r(S_t, a) + \sum_{k \geq 1} w(S_t, a, U_{t+k})\, R_{t+k}\Big)\Big] \qquad (24)$$
$$= \mathbb{E}_{T \sim \mathcal{T}(s_0, \pi)}\Big[\sum_{t \geq 0}\Big(\nabla_\theta \log \pi(A_t \mid S_t)\, R_t + \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t) \sum_{k \geq 1} w(S_t, a, U_{t+k})\, R_{t+k}\Big)\Big] \qquad (25)$$
where we used that removing a baseline $\sum_{a' \in \mathcal{A}} \pi(a' \mid s)\, r(s, a')$ independent from the actions does not change the policy gradient, and replaced $\mathbb{E}_{T \sim \mathcal{T}(s_0, \pi)}[\sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t)\, r(S_t, a)]$ by its sampled version $\mathbb{E}_{T \sim \mathcal{T}(s_0, \pi)}[\nabla_\theta \log \pi(A_t \mid S_t)\, R_t]$. Hence the policy gradient estimator of Eq. 4 is unbiased.
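For a tabular softmax policy, Eq. (25) translates directly into code. The sketch below is our own minimal illustration; the trajectory format (s_t, a_t, u_t, r_t) and the coefficient table w are assumed inputs, and in the actual algorithm the coefficients would come from a learned hindsight model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cocoa_policy_gradient(theta, trajectory, w):
    """COCOA gradient estimate in the spirit of Eq. (25) for a tabular softmax policy.

    theta[s, a] : policy logits, pi(.|s) = softmax(theta[s])
    trajectory  : list of (s_t, a_t, u_t, r_t) tuples (assumed format)
    w[s, a, u]  : contribution coefficients (assumed given)
    """
    nS, nA = theta.shape
    grad = np.zeros_like(theta)
    for t, (s_t, a_t, _, r_t) in enumerate(trajectory):
        pi_s = softmax(theta[s_t])
        glog = -pi_s
        glog[a_t] += 1.0                               # grad_theta log pi(a_t | s_t)
        grad[s_t] += glog * r_t                        # REINFORCE term for R_t
        for (_, _, u_k, r_k) in trajectory[t + 1:]:    # all future rewards, k >= 1
            for a in range(nA):
                gpi = -pi_s[a] * pi_s                  # grad_theta pi(a | s_t)
                gpi[a] += pi_s[a]
                grad[s_t] += gpi * w[s_t, a, u_k] * r_k
    return grad

# Minimal usage with arbitrary shapes and numbers:
theta = np.zeros((3, 2))
w = np.zeros((3, 2, 3))
traj = [(0, 1, 0, 0.0), (1, 0, 1, 0.0), (2, 1, 2, 1.0)]
print(cocoa_policy_gradient(theta, traj, w))
```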
2306.16803#95
2306.16803
96
# C.3 Different policy gradient estimators leveraging contribution coefficients
In this work, we use the COCOA estimator of Eq. 4 to estimate policy gradients leveraging the contribution coefficients of Eq. 3, as this estimator does not need a separate reward model, and it works well for the small action spaces we considered in our experiments. However, one can design other unbiased policy gradient estimators compatible with the same contribution coefficients. Harutyunyan et al. [1] introduced the HCA gradient estimator of Eq. 2, which can readily be extended to rewarding outcomes $U$:
$$\hat\nabla_\theta V^\pi = \sum_{t \geq 0} \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t)\Big(r(S_t, a) + \sum_{k \geq 1} w(S_t, a, U_{t+k})\, R_{t+k}\Big) \qquad (26)$$
This gradient estimator uses a reward model $r(s, a)$ to obtain the rewards corresponding to counterfactual actions, whereas the COCOA estimator (4) only uses the observed rewards $R_k$. For large action spaces, it might become computationally intractable to sum over all possible actions. In this case, we can sample independent actions from the policy, instead of summing over all actions, leading to the following policy gradient.
2306.16803#96
2306.16803
97
$$\hat\nabla_\theta V^\pi = \sum_{t \geq 0}\Big(\nabla_\theta \log \pi(A_t \mid S_t)\, R_t + \frac{1}{M} \sum_{m=1}^{M} \nabla_\theta \log \pi(a^m \mid S_t) \sum_{k \geq 1} w(S_t, a^m, U_{t+k})\, R_{t+k}\Big) \qquad (27)$$
where we sample $M$ actions from $a^m \sim \pi(\cdot \mid S_t)$. Importantly, for obtaining an unbiased policy gradient estimate, the actions $a^m$ should be sampled independently from the actions used in the trajectory of the observed rewards $R_{t+k}$. Hence, we cannot take the observed $A_t$ as a sample $a^m$, but need to instead sample independently from the policy. This policy gradient estimator can be used for continuous action spaces (c.f. App. G), and can also be combined with the reward model as in Eq. 26. One can show that the above policy gradient estimators are unbiased with a proof akin to the one of Theorem 1.
Comparison of COCOA to using time as a credit assignment heuristic. In Section 2 we briefly discussed the following discounted REINFORCE policy gradient:
$$\hat\nabla^{\text{REINFORCE}, \gamma}_\theta V^\pi(s_0) = \sum_{t \geq 0} \nabla_\theta \log \pi(A_t \mid S_t) \sum_{k \geq t} \gamma^{k-t} R_k \qquad (28)$$
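Eq. (27) above swaps the exact sum over actions for M freshly sampled actions. A hedged sketch of that variant, reusing the same assumed trajectory and coefficient formats as the earlier COCOA sketch:

```python
import numpy as np

def cocoa_sampled_actions(theta, trajectory, w, M=8, rng=None):
    """Sampled-action estimate in the spirit of Eq. (27): the sum over actions is
    replaced by M actions drawn from pi(.|S_t), independently of the actions that
    generated the trajectory. Inputs follow the assumed formats of the previous
    sketch (tabular softmax policy, coefficient table w[s, a, u] given)."""
    rng = rng or np.random.default_rng()
    nS, nA = theta.shape
    grad = np.zeros_like(theta)
    for t, (s_t, a_t, _, r_t) in enumerate(trajectory):
        e = np.exp(theta[s_t] - theta[s_t].max())
        pi_s = e / e.sum()
        glog_taken = -pi_s.copy()
        glog_taken[a_t] += 1.0
        grad[s_t] += glog_taken * r_t                   # REINFORCE term for R_t
        future = trajectory[t + 1:]
        if not future:
            continue
        for a in rng.choice(nA, size=M, p=pi_s):        # fresh, independent actions
            glog = -pi_s.copy()
            glog[a] += 1.0                              # grad_theta log pi(a | s_t)
            weighted = sum(w[s_t, a, u] * r for (_, _, u, r) in future)
            grad[s_t] += glog * weighted / M
    return grad
```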
2306.16803#97
2306.16803
98
$$\hat\nabla^{\text{REINFORCE}, \gamma}_\theta V^\pi(s_0) = \sum_{t \geq 0} \nabla_\theta \log \pi(A_t \mid S_t) \sum_{k \geq t} \gamma^{k-t} R_k \qquad (28)$$
with discount factor $\gamma \in [0, 1]$. Note that we specifically put no discounting $\gamma^t$ in front of $\nabla_\theta \log \pi(A_t \mid S_t)$, which would be required for being an unbiased gradient estimate of the discounted expected total return, as the above formulation is most used in practice [12, 27]. Rewriting the summations reveals that this policy gradient uses time as a heuristic for credit assignment:
$$\hat\nabla^{\text{REINFORCE}, \gamma}_\theta V^\pi(s_0) = \sum_{t \geq 0} R_t \sum_{k \leq t} \gamma^{t-k}\, \nabla_\theta \log \pi(A_k \mid S_k). \qquad (29)$$
We can rewrite the COCOA gradient estimator (4) with the same reordering of summation, showcasing that it leverages the contribution coefficient for providing precise credit to past actions, instead of using the time discounting heuristic:
$$\hat\nabla_\theta V^\pi(s_0) = \sum_{t \geq 0} R_t \Big(\nabla_\theta \log \pi(A_t \mid S_t) + \sum_{k < t} \sum_{a \in \mathcal{A}} w(S_k, a, U_t)\, \nabla_\theta \pi(a \mid S_k)\Big) \qquad (30)$$
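The reordered sums in Eqs. (29) and (30) make the contrast explicit: for a reward observed at time t, REINFORCE-γ credits an earlier step k with weight γ^(t−k), while COCOA credits it through the contribution coefficients. The toy printout below is purely illustrative; the coefficient values are made up.

```python
import numpy as np

T, gamma = 6, 0.8
rng = np.random.default_rng(3)
# Hypothetical contribution coefficients w(S_k, A_k, U_t) of each past step k
# towards the rewarding outcome observed at t = T - 1 (values are made up).
w_past = rng.uniform(-0.2, 1.0, size=T - 1)

print(f"credit assigned to step k for the reward observed at t = {T - 1}")
for k in range(T - 1):
    time_heuristic = gamma ** (T - 1 - k)     # Eq. (29): decays with temporal distance
    contribution = w_past[k]                  # Eq. (30): depends on measured influence
    print(f"k={k}: gamma^(t-k) = {time_heuristic:.3f}   w = {contribution:+.3f}")
```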
2306.16803#98
2306.16803
99
# C.4 Proof of Proposition 2
Proof. Proposition 2 assumes that each action sequence $\{A_m\}_{m=t}^{t+k}$ leads to a unique state $s'$. Hence, all previous actions can be decoded perfectly from the state $s'$, leading to $p^\pi(A_t = a \mid S_t, S' = s') = \delta(a = a_t)$, with $\delta$ the indicator function and $a_t$ the action taken in the trajectory that led to $s'$. Filling this into the COCOA gradient estimator leads to
$$\hat\nabla_\theta V^\pi(s_0) = \sum_{t \geq 0}\Big(\nabla_\theta \log \pi(A_t \mid S_t)\, R_t + \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_t) \sum_{k \geq 1} \Big(\frac{\delta(a = A_t)}{\pi(a \mid S_t)} - 1\Big) R_{t+k}\Big) \qquad (31)$$
$$= \sum_{t \geq 0}\Big(\nabla_\theta \log \pi(A_t \mid S_t)\, R_t + \sum_{k \geq 1} \frac{\nabla_\theta \pi(A_t \mid S_t)}{\pi(A_t \mid S_t)}\, R_{t+k}\Big) \qquad (32)$$
$$= \sum_{t \geq 0} \nabla_\theta \log \pi(A_t \mid S_t) \sum_{k \geq 0} R_{t+k} \qquad (33)$$
where we used that $\sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s) = 0$.
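The key step, that the weights δ(a = A_t)/π(a|S_t) − 1 collapse the counterfactual sum onto ∇_θ log π(A_t|S_t), can be checked numerically for a single softmax policy state. This is our own stand-alone check, not code from the paper.

```python
import numpy as np

# Softmax policy at one state; theta are the logits.
theta = np.array([0.3, -0.1, 0.7])
pi_s = np.exp(theta) / np.exp(theta).sum()
a_taken = 1

# grad_theta pi(a|s): rows indexed by a, grad_pi[a, b] = pi(a) (delta_ab - pi(b)).
grad_pi = np.diag(pi_s) - np.outer(pi_s, pi_s)

# Coefficients when the future state perfectly decodes the taken action (Prop. 2).
w = (np.arange(3) == a_taken) / pi_s - 1.0

cocoa_term = grad_pi.T @ w                          # sum_a w(s, a, s') grad_theta pi(a|s)
reinforce_term = grad_pi[a_taken] / pi_s[a_taken]   # grad_theta log pi(a_taken|s)
print(np.allclose(cocoa_term, reinforce_term))      # True: the estimator degrades to REINFORCE
```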
2306.16803#99
2306.16803
100
# C.5 Proof Theorem 3
Proof. Theorem 3 considers the case where the environment only contains a reward at the final time step $t = T$, and where we optimize the policy only on a single (initial) time step $t = 0$. Then, the policy gradients are given by
$$\hat\nabla^{U}_\theta V^\pi(U_T, R_T) = \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s)\, w(s, a, U_T)\, R_T \qquad (34)$$
$$\hat\nabla^{\text{REINFORCE}}_\theta V^\pi(A_0, R_T) = \nabla_\theta \log \pi(A_0 \mid s)\, R_T \qquad (35)$$
with $U$ either $S$, $R$ or $Z$, and $s$ the state at $t = 0$. As only the last time step can have a reward by assumption, and the encoding $U$ needs to retain the predictive information of the reward, the contribution coefficients for $u$ corresponding to a nonzero reward are equal to
$$w(s, a, u) = \frac{p^\pi(A_0 = a \mid S_0 = s, U_T = u)}{\pi(a \mid s)} - 1 \qquad (36)$$
2306.16803#100
2306.16803
101
The coefficients corresponding to zero-reward outcomes are multiplied with zero in the gradient estimator, and can hence be ignored. Now, we proceed by showing that $\mathbb{E}[\hat\nabla^{\text{REINFORCE}}_\theta V^\pi(A_0, R_T) \mid S_T, R_T] = \hat\nabla^{S}_\theta V^\pi(S_T, R_T)$, $\mathbb{E}[\hat\nabla^{S}_\theta V^\pi(S_T, R_T) \mid U'_T, R_T] = \hat\nabla^{U'}_\theta V^\pi(U'_T, R_T)$, $\mathbb{E}[\hat\nabla^{U'}_\theta V^\pi(U'_T, R_T) \mid U_T, R_T] = \hat\nabla^{U}_\theta V^\pi(U_T, R_T)$, and $\mathbb{E}[\hat\nabla^{U}_\theta V^\pi(U_T, R_T) \mid R_T] = \hat\nabla^{R}_\theta V^\pi(R_T)$, after which we can use the law of total variance to prove our theorem. As $S_T$ is fully predictive of $R_T$, the following conditional independence holds: $p^\pi(A_0 \mid S_0 = s, S_T, R_T) = p^\pi(A_0 \mid S_0 = s, S_T)$ (c.f. Definition 2). Hence, we have that
$$\mathbb{E}[\hat\nabla^{\text{REINFORCE}}_\theta V^\pi(A_0, R_T) \mid S_T, R_T] \qquad (37)$$
2306.16803#101
2306.16803
102
$$= \sum_{a \in \mathcal{A}} p(A_0 = a \mid S_0 = s, S_T)\, \frac{\nabla_\theta \pi(a \mid s)}{\pi(a \mid s)}\, R_T = \hat\nabla^{S}_\theta V^\pi(S_T, R_T) \qquad (38)$$
where we used that $\sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s) = \nabla_\theta \sum_{a \in \mathcal{A}} \pi(a \mid s) = 0$. Similarly, as $S_T$ is fully predictive of $U'_T$ (c.f. Definition 2), we have that
$$p(A \mid S, U'_T) = \sum_{s_T \in \mathcal{S}} p(S_T = s_T \mid S, U'_T)\, p(A \mid S, S_T = s_T, U'_T) \qquad (39)$$
$$= \sum_{s_T \in \mathcal{S}} p(S_T = s_T \mid S, U'_T)\, p(A \mid S, S_T = s_T) \qquad (40)$$
Using the conditional independence relation $p^\pi(S_T \mid S_0, U'_T, R_T) = p^\pi(S_T \mid S_0, U'_T)$, following from d-separation in Fig. 6b, this leads us to
2306.16803#102
2306.16803
103
$$\mathbb{E}[\hat\nabla^{S}_\theta V^\pi(S_T, R_T) \mid U'_T, R_T] \qquad (41)$$
$$= \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s) \Big(\sum_{s_T \in \mathcal{S}} p(S_T = s_T \mid S_0 = s, U'_T)\, \frac{p(A_0 = a \mid S = s, S_T = s_T)}{\pi(a \mid s)} - 1\Big) R_T \qquad (42)$$
$$= \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s) \Big(\frac{p(A_0 = a \mid S = s, U'_T)}{\pi(a \mid s)} - 1\Big) R_T \qquad (43)$$
$$= \hat\nabla^{U'}_\theta V^\pi(U'_T, R_T) \qquad (44)$$
Using the same derivation leveraging the fully predictive properties (c.f. Definition 2), we get
$$\mathbb{E}[\hat\nabla^{U'}_\theta V^\pi(U'_T, R_T) \mid U_T, R_T] = \hat\nabla^{U}_\theta V^\pi(U_T, R_T) \qquad (45)$$
$$\mathbb{E}[\hat\nabla^{U}_\theta V^\pi(U_T, R_T) \mid R_T] = \hat\nabla^{R}_\theta V^\pi(R_T) \qquad (46)$$
Now we can use the law of total variance, which states that $\mathbb{V}[X] = \mathbb{E}[\mathbb{V}[X \mid Y]] + \mathbb{V}[\mathbb{E}[X \mid Y]]$. Hence, we have that
2306.16803#103
2306.16803
104
$\mathbb{V}[\hat{\nabla}^{\text{REINFORCE}}_\theta V^\pi] = \mathbb{V}\big[\mathbb{E}[\hat{\nabla}^{\text{REINFORCE}}_\theta V^\pi \mid S_T]\big] + \mathbb{E}\big[\mathbb{V}[\hat{\nabla}^{\text{REINFORCE}}_\theta V^\pi \mid S_T]\big]$ (47)
$= \mathbb{V}[\hat{\nabla}^S_\theta V^\pi] + \mathbb{E}\big[\mathbb{V}[\hat{\nabla}^{\text{REINFORCE}}_\theta V^\pi \mid S_T]\big] \succeq \mathbb{V}[\hat{\nabla}^S_\theta V^\pi]$ (48)
as $\mathbb{E}\big[\mathbb{V}[\hat{\nabla}^{\text{REINFORCE}}_\theta V^\pi \mid S_T]\big]$ is positive semi-definite. Using the same construction for the other pairs, we arrive at
$\mathbb{V}[\hat{\nabla}^R_\theta V^\pi(s_0)] \preceq \mathbb{V}[\hat{\nabla}^U_\theta V^\pi(s_0)] \preceq \mathbb{V}[\hat{\nabla}^{U'}_\theta V^\pi(s_0)] \preceq \mathbb{V}[\hat{\nabla}^S_\theta V^\pi(s_0)] \preceq \mathbb{V}[\hat{\nabla}^{\text{REINFORCE}}_\theta V^\pi(s_0)]$ (49)
thereby concluding the proof.
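The variance ordering above is an instance of Rao-Blackwellization via the law of total variance: replacing an estimator by its conditional expectation given a coarser statistic preserves the mean and can only reduce the variance. A toy numerical check of this effect (the random variables below are illustrative stand-ins, not the actual policy-gradient estimators):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: Y is a coarse "rewarding outcome", X is a noisy estimator that
# depends on Y plus extra noise (standing in for a fine-grained estimator).
n = 100_000
y = rng.integers(0, 5, size=n)                       # coarse outcome
x = y.astype(float) + rng.normal(0.0, 2.0, size=n)   # noisy estimator

# Rao-Blackwellized estimator: replace X by its conditional expectation E[X | Y].
cond_mean = np.array([x[y == k].mean() for k in range(5)])
x_rb = cond_mean[y]

# Law of total variance: Var[X] = E[Var[X|Y]] + Var[E[X|Y]] >= Var[E[X|Y]].
print("Var[X]      =", x.var())
print("Var[E[X|Y]] =", x_rb.var())
print("Same mean?  ", np.isclose(x.mean(), x_rb.mean(), atol=1e-2))
```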
2306.16803#104
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
105
thereby concluding the proof. Additional empirical verification. To get more insight into how the information content of the rewarding-outcome encoding U relates to the variance of the COCOA gradient estimator, we repeat the experiment of Fig. 2, but now plot the variance as a function of the amount of information in the rewarding outcome encodings, for a fixed state overlap of 3. Fig. 7 shows that the variance of the resulting COCOA gradient estimators interpolates between COCOA-reward and HCA+, with more informative encodings leading to a higher variance. These empirical results show that the insights of Theorem 3 hold in this more general setting of estimating full policy gradients in an MDP with random rewards.
# C.6 Proof of Theorem 4
Proof. This proof follows a similar technique to the policy gradient theorem [26]. Let us first define the expected number of occurrences of u starting from state s as $O^\pi(u, s) = \sum_{k \geq 1} p^\pi(U_k = u \mid S_0 = s)$, and its action equivalent $O^\pi(u, s, a) = \sum_{k \geq 1} p^\pi(U_k = u \mid S_0 = s, A_0 = a)$. Expanding $O^\pi(u, s)$ leads to
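To make the quantity $O^\pi(u, s)$ concrete, the sketch below estimates it by Monte Carlo rollouts in a small random MDP, counting occurrences of the rewarding outcome after the first step. The MDP, the state encoding and all variable names are hypothetical stand-ins, and the infinite sum is truncated at a finite horizon:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, horizon = 4, 2, 10

# Hypothetical MDP: random transition tensor P[s, a, s'] and a fixed policy pi[s, a].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
pi = rng.dirichlet(np.ones(n_actions), size=n_states)
u_of_state = np.array([0, 0, 1, 1])   # rewarding-outcome encoding u = U(s)

def estimate_O(s0, a0, u, n_rollouts=5_000):
    """Monte Carlo estimate of O^pi(u, s0, a0) = sum_{k>=1} p^pi(U_k = u | s0, a0)."""
    total = 0.0
    for _ in range(n_rollouts):
        s, a = s0, a0
        for _k in range(horizon):
            s = rng.choice(n_states, p=P[s, a])
            total += float(u_of_state[s] == u)   # count every occurrence, not just the first
            a = rng.choice(n_actions, p=pi[s])
    return total / n_rollouts

O_sa = estimate_O(s0=0, a0=1, u=1)
O_s = sum(pi[0, a] * estimate_O(0, a, 1) for a in range(n_actions))
print("O(u=1, s=0, a=1) ≈", O_sa, "  contribution coefficient ≈", O_sa / O_s - 1)
```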
2306.16803#105
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
107
Figure 7: Less informative rewarding outcome encodings lead to gradient estimators with lower variance. Normalized variance in dB using ground-truth coefficients and a random uniform policy, for various gradient estimators on the tree environment (shaded region represents standard error over 10 random environments). To increase the information content of U, we increase the number of different encodings u corresponding to the same reward value (c.f. $n_g$ in Section E.6), indicated by the x-axis (number of groups per reward value). COCOA-u-group indicates the COCOA estimator with rewarding outcome encodings of increasing information content, whereas COCOA-reward and HCA+ have fixed rewarding outcome encodings of U = R and U = S respectively.
Now define $\phi(u, s) = \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s) O^\pi(u, s, a)$, and $p(S_l = s' \mid S_0 = s)$ as the probability of reaching s' starting from s in l steps. We have that $\sum_{s'' \in \mathcal{S}} p^\pi(S_l = s'' \mid S_0 = s)\, p^\pi(S_1 = s' \mid S_0 = s'') = p^\pi(S_{l+1} = s' \mid S_0 = s)$. Leveraging this relation and recursively applying the above equation leads to
2306.16803#107
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
108
$\nabla_\theta O^\pi(u, s) = \sum_{s' \in \mathcal{S}} \sum_{l \geq 0} p^\pi(S_l = s' \mid S_0 = s)\, \phi(u, s')$ (52)
$= \sum_{s' \in \mathcal{S}} \sum_{l \geq 0} p^\pi(S_l = s' \mid S_0 = s) \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s') O^\pi(u, s', a)$ (53)
$\propto \mathbb{E}_{S \sim T(s, \pi)} \Big[ \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S) O^\pi(u, S, a) \Big]$ (54)
where in the last step we normalized $\sum_{l \geq 0} p^\pi(S_l = s' \mid S_0 = s)$ with $\sum_{s'' \in \mathcal{S}} \sum_{l \geq 0} p^\pi(S_l = s'' \mid S_0 = s)$, resulting in the state distribution $S \sim T(s, \pi)$ where S is sampled from trajectories starting from s and following policy π. Finally, we can rewrite the contribution coefficients as
$w(s, a, u) = \frac{O^\pi(u, s, a)}{O^\pi(u, s)} - 1$ (55)
which leads to
$\nabla_\theta O^\pi(u, s) \propto \mathbb{E}_{S \sim T(s, \pi)} \Big[ \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S)\, w(S, a, u)\, O^\pi(u, S) \Big]$ (56)
where we used that
2306.16803#108
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
109
where we used that $\sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s) = 0$, thereby concluding the proof.
# D Learning the contribution coefficients
# D.1 Proof Proposition 5
Proof. In short, as the logits l are a deterministic function of the state s, they do not provide any further information on the hindsight distribution and we have that $p^\pi(A_0 = a \mid S_0 = s, L_0 = l, U' = u') = p^\pi(A_0 = a \mid S_0 = s, U' = u')$. We can arrive at this result by observing that $p(A_0 = a \mid S_0 = s, L_0 = l) = p(A_0 = a \mid S_0 = s) = \pi(a \mid s)$, as the policy logits encode the policy distribution. Using this, we have that
$p^\pi(A_0 = a \mid S_0 = s, L_0 = l, U' = u')$ (57)
$= \frac{p^\pi(A_0 = a, U' = u' \mid S_0 = s, L_0 = l)}{p^\pi(U' = u' \mid S_0 = s, L_0 = l)}$ (58)
2306.16803#109
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
111
$= p^\pi(A_0 = a \mid S_0 = s, U' = u')$ (62)
D.2 Learning contribution coefficients via contrastive classification. Instead of learning the hindsight distribution, we can estimate the probability ratio $p^\pi(A_t = a \mid S_t = s, U' = u')/\pi(a \mid s)$ directly, by leveraging contrastive classification. Proposition 7 shows that by training a binary classifier D(a, s, u') to distinguish actions sampled from $p^\pi(A_t = a \mid S_t = s, U' = u')$ versus $\pi(a \mid s)$, we can directly estimate the probability ratio.
Proposition 7. Consider the contrastive loss
$\mathcal{L} = -\mathbb{E}_{s, u' \sim T(s_0, \pi)} \Big[ \mathbb{E}_{a \sim p^\pi(a \mid s, u')}[\log D(a, s, u')] + \mathbb{E}_{a \sim \pi(a \mid s)}[\log(1 - D(a, s, u'))] \Big]$ (63)
and D*(s, a, u') its minimizer. Then the following holds
$w(s, a, u') = \frac{D^*(a, s, u')}{1 - D^*(a, s, u')} - 1.$ (64)
2306.16803#111
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
112
$w(s, a, u') = \frac{D^*(a, s, u')}{1 - D^*(a, s, u')} - 1.$ (64)
Proof. Consider a fixed pair (s, u'). We can obtain the discriminator D*(a, s, u') that maximizes $\mathbb{E}_{a \sim p^\pi(a \mid s, u')}[\log D(a, s, u')] + \mathbb{E}_{a \sim \pi(a \mid s)}[\log(1 - D(a, s, u'))]$ by taking the point-wise derivative of this objective and equating it to zero:
$p^\pi(a \mid s, u') \frac{1}{D^*(a, s, u')} - \pi(a \mid s) \frac{1}{1 - D^*(a, s, u')} = 0$ (65)
$\Rightarrow \frac{p^\pi(a \mid s, u')}{\pi(a \mid s)} = \frac{D^*(a, s, u')}{1 - D^*(a, s, u')}$ (66)
2306.16803#112
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
113
As for any $(a, b) \in \mathbb{R}^2_{\geq 0} \setminus \{0\}$, the function $f(x) = a \log x + b \log(1 - x)$ achieves its global maximum on the support $x \in [0, 1]$ at $x = \frac{a}{a+b}$, the above maximum is a global maximum. Repeating this argument for all (s, u') pairs concludes the proof.
We can approximate the training objective of D by sampling $a^{(m)}, s^{(m)}, u'^{(m)}$ along the observed trajectories, while leveraging that we have access to the policy π, leading to the following loss
$\mathcal{L} = \sum_{m=1}^{M} \Big[ -\log D(a^{(m)}, s^{(m)}, u'^{(m)}) - \sum_{a' \in \mathcal{A}} \pi(a' \mid s^{(m)}) \log\big(1 - D(a', s^{(m)}, u'^{(m)})\big) \Big]$ (67)
Numerical stability. Assuming D uses the sigmoid nonlinearity on its outputs, we can improve the numerical stability by computing the logarithm and sigmoid jointly. This results in
$\log \sigma(x) = -\log(1 + \exp(-x))$ (68)
$\log(1 - \sigma(x)) = -x - \log(1 + \exp(-x))$ (69)
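A minimal sketch of the empirical loss in Eq. 67 together with the numerically stable log-sigmoid identities of Eqs. 68-69; the tabular discriminator logits and the toy batch below are hypothetical placeholders, not the paper's implementation:

```python
import numpy as np

def log_sigmoid(x):
    # log(sigma(x)) = -log(1 + exp(-x)), computed stably
    return -np.logaddexp(0.0, -x)

def log_one_minus_sigmoid(x):
    # log(1 - sigma(x)) = -x - log(1 + exp(-x))
    return -x - np.logaddexp(0.0, -x)

def contrastive_loss(logits, batch, pi):
    """Eq. 67: positives are observed (a, s, u') triples, negatives are all
    actions weighted by the policy pi(a' | s). `logits[s, a, u]` is a tabular D."""
    loss = 0.0
    for a, s, u in batch:
        loss -= log_sigmoid(logits[s, a, u])                                 # -log D(a, s, u')
        loss -= np.sum(pi[s] * log_one_minus_sigmoid(logits[s, :, u]))       # policy-weighted negatives
    return loss

# Toy example with 3 states, 2 actions and 2 outcome encodings.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 2, 2))
pi = np.full((3, 2), 0.5)
batch = [(0, 1, 1), (1, 2, 0)]          # (action, state, outcome) triples
print("loss =", contrastive_loss(logits, batch, pi))
# The ratio D / (1 - D) needed for w(s, a, u') is simply exp(logit), cf. Eq. 70.
```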
2306.16803#113
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
114
$\log \sigma(x) = -\log(1 + \exp(-x))$ (68)
$\log(1 - \sigma(x)) = -x - \log(1 + \exp(-x))$ (69)
The probability ratio D/(1 − D) can then be computed with
$\frac{D}{1 - D} = \exp\big[-\log(1 + \exp(-x)) + x + \log(1 + \exp(-x))\big] = \exp(x)$ (70)
D.3 Successor representations. If we take u' equal to the state s', we can observe that the sum $\sum_{k \geq 1} p^\pi(S_{t+k} = s' \mid s, a)$ used in the contribution coefficients (3) is equal to the successor representation M(s, a, s') introduced by Dayan [30]. Hence, we can leverage temporal differences to learn M(s, a, s'), either in a tabular setting [30], or in a deep feature learning setting [31]. Using the successor representations, we can construct the state-based contribution coefficients as $w(s, a, s') = M(s, a, s') / \big[\sum_{a' \in \mathcal{A}} \pi(a' \mid s) M(s, a', s')\big] - 1$.
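A small sketch of the tabular route described above: learn M(s, a, s') with a TD(0)-style update on sampled transitions and then form the state-based coefficients. The toy chain MDP, its absorbing state and the learning rate are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2            # state 4 is absorbing
pi = np.full((n_states, n_actions), 0.5)

def step(s, a):
    # Hypothetical chain: action 0 always moves right, action 1 stays with prob. 0.5.
    if s == 4:
        return 4
    return min(s + 1, 4) if a == 0 or rng.random() < 0.5 else s

# Tabular successor representation M[s, a, s'] = sum_{k>=1} p^pi(S_k = s' | s, a),
# learned with a TD(0)-style update; the absorbing state is never bootstrapped from.
M = np.zeros((n_states, n_actions, n_states))
alpha = 0.05
for _ in range(20_000):
    s = rng.integers(0, 4)
    a = rng.integers(0, n_actions)
    s_next = step(s, a)
    e = np.zeros(n_states)
    e[s_next] = 1.0
    bootstrap = 0.0 if s_next == 4 else pi[s_next] @ M[s_next]   # sum_a' pi(a'|s') M(s', a', .)
    M[s, a] += alpha * (e + bootstrap - M[s, a])

# State-based coefficients w(s, a, s') = M(s,a,s') / sum_a' pi(a'|s) M(s,a',s') - 1
s, s_tgt = 0, 3
denom = pi[s] @ M[s, :, s_tgt]
print([M[s, a, s_tgt] / denom - 1 for a in range(n_actions)])
```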
2306.16803#114
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
115
For a rewarding outcome encoding U different from S, we can recombine the state-based successor representations to obtain the required contribution coefficients using the following derivation.
$\sum_{k \geq 1} p^\pi(U_k = u' \mid S_0 = s, A_0 = a) = \sum_{k \geq 1} \sum_{s' \in \mathcal{S}} p^\pi(U_k = u', S_k = s' \mid S_0 = s, A_0 = a)$ (71)
$= \sum_{k \geq 1} \sum_{s' \in \mathcal{S}} p(U' = u' \mid S' = s')\, p^\pi(S_k = s' \mid S_0 = s, A_0 = a)$ (72)
$= \sum_{s' \in \mathcal{S}} p(U' = u' \mid S' = s') \sum_{k \geq 1} p^\pi(S_k = s' \mid S_0 = s, A_0 = a)$ (73)
$= \sum_{s' \in \mathcal{S}} p(U' = u' \mid S' = s')\, M(s, a, s')$ (74)
where in the second line we used that S is fully predictive of U (c.f. Definition 2). Note that $p(U' \mid S')$ is policy independent, and can hence be approximated efficiently using offline data. We use this recombination of successor representations in our dynamic programming setup to compute the ground-truth contribution coefficients (c.f. App. E).
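The recombination in Eq. 74 is a single weighted sum over states, so it can be written as one tensor contraction. A sketch with made-up arrays (placeholders, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions, n_outcomes = 6, 3, 2

# Placeholder quantities: a successor representation M[s, a, s'] and the
# policy-independent outcome model p_u_given_s[s', u'] = p(U' = u' | S' = s').
M = rng.random((n_states, n_actions, n_states))
p_u_given_s = rng.dirichlet(np.ones(n_outcomes), size=n_states)
pi = rng.dirichlet(np.ones(n_actions), size=n_states)

# Eq. 74: sum_{k>=1} p(U_k = u' | s, a) = sum_{s'} p(U' = u' | S' = s') M(s, a, s')
O_sau = np.einsum("xu,sax->sau", p_u_given_s, M)     # shape (n_states, n_actions, n_outcomes)

# Contribution coefficients w(s, a, u') = O(s, a, u') / sum_a' pi(a'|s) O(s, a', u') - 1
O_su = np.einsum("sa,sau->su", pi, O_sau)
w = O_sau / O_su[:, None, :] - 1.0
print(w[0])   # coefficients for all actions and outcome encodings in state 0
```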
2306.16803#115
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
116
# E Experimental details and additional results
Here, we provide additional details on all experiments performed in the manuscript and present additional results.
# E.1 Algorithms
Algorithm 1 shows pseudocode for the COCOA family of algorithms.
# E.2 Dynamic programming setup
In order to delineate the different policy gradient methods considered in this work, we develop a framework to compute ground-truth policy gradients and advantages as well as ground-truth contribution coefficients. We will first show how we can recursively compute a quantity closely related to the successor representation using dynamic programming, which then allows us to obtain expected advantages and finally expected policy gradients.
E.2.1 Computing ground truth quantities
To compute ground truth quantities, we assume that the environment reaches an absorbing, terminal state $s_\infty$ after T steps for all states s, i.e. $p^\pi(S_T = s_\infty \mid s_0 = s) = 1$. This assumption is satisfied by the linear key-to-door and tree environments we consider. As a helper quantity, we define the successor representation:
$M(s, a, s', T) = \sum_{k=1}^{T} p^\pi(S_k = s' \mid S_0 = s, A_0 = a)$ (75)
2306.16803#116
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
117
$M(s, a, s', T) = \sum_{k=1}^{T} p^\pi(S_k = s' \mid S_0 = s, A_0 = a)$ (75)
which captures the cumulative probability over all time steps of reaching state s' when starting from state $S_0 = s$, choosing initial action $A_0 = a$ and following the policy π thereafter. We can compute M(s, a, s', T) recursively as
$M(s, a, s', t) = \sum_{s'' \in \mathcal{S}} p(S_1 = s'' \mid S_0 = s, A_0 = a) \sum_{a'' \in \mathcal{A}} \pi(a'' \mid s'') \big( \mathbb{1}_{s'' = s'} + M(s'', a'', s', t - 1) \big)$ (76)
where 1 is the indicator function and M(s, a, s', 0) is initialized to 0 everywhere. Then, the various quantities can be computed exactly as follows.
# Algorithm 1 COCOA
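A direct transcription of the recursion in Eq. 76 for a tabular MDP; the transition tensor, policy and horizon below are random placeholders rather than the paper's environments:

```python
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions, T = 5, 2, 20

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P[s, a, s'']
pi = rng.dirichlet(np.ones(n_actions), size=n_states)              # pi[s'', a'']

# M(s, a, s', 0) = 0; apply the recursion of Eq. 76 T times.
M = np.zeros((n_states, n_actions, n_states))
eye = np.eye(n_states)
for _ in range(T):
    # inner[s'', s'] = sum_{a''} pi(a'' | s'') (1_{s''=s'} + M(s'', a'', s', t-1))
    inner = eye + np.einsum("xa,xay->xy", pi, M)
    M = np.einsum("sax,xy->say", P, inner)

print(M[0, 0])   # cumulative occupancy of each s' over T steps from (s=0, a=0)
```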
2306.16803#117
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
118
Require: Initial π, h, episode length T, number of episodes N, number of pretraining episodes M, batch size L, number of pretraining update steps K.
1: for i = 1 to M do  ▷ Collect random trajectories
2:   Sample L trajectories τ = {{S_t, A_t, R_t}_{t=0}^{T}}_{l=1}^{L} from a random policy
3:   Add trajectories to the buffer
4: end for
5: for j = 1 to K do  ▷ Pretrain reward features
6:   Sample batch from buffer
7:   Train reward features U = f(S, A) to predict rewards R via mean squared error
8: end for
9: for i = 1 to N − M do
10:   Sample L trajectories τ = {{S_t, A_t, R_t}_{t=0}^{T}}_{l=1}^{L} from π
11:   for t = 1 to T do  ▷ Update the contribution module
12:     for k > t do
13:       Train h(A_t | S_t, U_k, π) via cross-entropy loss on all trajectories τ
14:     end for
15:   end for
16:   for t = 1 to T do  ▷ Update the policy
17:     for a ∈ A do
18:       X_a = ∇_θ π(a | S_t) Σ_{k>t} w(S_t, a, U_k) R_k
19:     end for
20:     ∇̂_θ V^π(S_0) = Σ_t ∇_θ log π(A_t
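Since the listing is cut off at line 20 by the chunk boundary, the sketch below assumes the COCOA estimator combines a REINFORCE term for the immediate reward with the coefficient-weighted term X_a for later rewards, as described in the main text; the tabular softmax policy and all helper names are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cocoa_gradient(theta, trajectory, w):
    """Sketch of the policy update (Algorithm 1, lines 16-20) for a tabular softmax
    policy theta[s, a]. `trajectory` is a list of (s, a, u, r) tuples and w(s, a, u)
    returns a contribution coefficient; both are assumed to be given."""
    grad = np.zeros_like(theta)
    n_actions = theta.shape[1]
    onehot = np.eye(n_actions)
    for t, (s, a_t, _, r_t) in enumerate(trajectory):
        probs = softmax(theta[s])
        # REINFORCE term for the immediate reward: grad_theta log pi(A_t | S_t) * R_t
        grad[s] += (onehot[a_t] - probs) * r_t
        # Counterfactual term: credit every action a for later rewards via w(S_t, a, U_k)
        for a in range(n_actions):
            dpi_a = probs[a] * (onehot[a] - probs)     # d pi(a|s) / d theta[s, :]
            credit = sum(w(s, a, u_k) * r_k for (_, _, u_k, r_k) in trajectory[t + 1:])
            grad[s] += dpi_a * credit
    return grad

# Toy usage: with all-zero coefficients, COCOA reduces to REINFORCE on immediate rewards.
theta = np.zeros((2, 2))
traj = [(0, 1, 0, 0.0), (1, 0, 1, 1.0)]
print(cocoa_gradient(theta, traj, w=lambda s, a, u: 0.0))
```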
2306.16803#118
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
120
# Contribution coefficients
$w(s, a, u') = \frac{M(s, a, u', T)}{\sum_{a' \in \mathcal{A}} \pi(a' \mid s) M(s, a', u', T)} - 1$ (77)
where
$M(s, a, u', T) = \sum_{s' \in \mathcal{S}} M(s, a, s', T) \sum_{a' \in \mathcal{A}} \pi(a' \mid s') \sum_{r'} p(r' \mid s', a')\, \mathbb{1}_{f(s', a', r') = u'},$ (78)
similar to the successor representations detailed in Section D. To discover the full state space S for an arbitrary environment, we perform a depth-first search through the environment transitions (which are deterministic in our setting), and record the encountered states.
# Value function
$V(s) = \sum_{a' \in \mathcal{A}} \pi(a' \mid s) r(s, a') + \sum_{k=1}^{T} \sum_{s' \in \mathcal{S}} p^\pi(S_k = s' \mid S_0 = s) \sum_{a' \in \mathcal{A}} \pi(a' \mid s') r(s', a')$ (79)
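Once M is available, the ground-truth value functions reduce to weighted sums over M (cf. the M-based forms in Eqs. 80 and 82). A sketch with placeholder tensors (hypothetical values, not the paper's environments):

```python
import numpy as np

rng = np.random.default_rng(4)
n_states, n_actions = 5, 2

# Placeholder quantities; in the dynamic programming setup M comes from the recursion in Eq. 76.
M = rng.random((n_states, n_actions, n_states))         # M[s, a, s']
pi = rng.dirichlet(np.ones(n_actions), size=n_states)   # pi[s, a]
r_sa = rng.random((n_states, n_actions))                # expected reward r(s, a)

# Expected reward of the policy in each state: sum_a' pi(a'|s') r(s', a')
r_pi = np.sum(pi * r_sa, axis=1)

# Q(s, a) = r(s, a) + sum_{s'} M(s, a, s') r_pi(s')           (cf. Eq. 82)
Q = r_sa + M @ r_pi
# V(s) = sum_a pi(a|s) Q(s, a), equivalent to the M-based form of Eq. 80
V = np.sum(pi * Q, axis=1)
print(Q[0], V[0])
```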
2306.16803#120
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
121
$= \sum_{a' \in \mathcal{A}} \pi(a' \mid s) r(s, a') + \sum_{s' \in \mathcal{S}} \sum_{a \in \mathcal{A}} \pi(a \mid s) M(s, a, s', T) \sum_{a' \in \mathcal{A}} \pi(a' \mid s') r(s', a')$ (80)
where $r(s', a') = \sum_{r'} r'\, p(r' \mid s', a')$.
# Action-value function
$Q(s, a) = r(s, a) + \sum_{k=1}^{T} \sum_{s' \in \mathcal{S}} p^\pi(S_k = s' \mid S_0 = s, A_0 = a) \sum_{a' \in \mathcal{A}} \pi(a' \mid s') r(s', a')$ (81)
$= r(s, a) + \sum_{s' \in \mathcal{S}} M(s, a, s', T) \sum_{a' \in \mathcal{A}} \pi(a' \mid s') r(s', a')$ (82)
where $r(s', a') = \sum_{r'} r'\, p(r' \mid s', a')$.
2306.16803#121
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
122
where $r(s', a') = \sum_{r'} r'\, p(r' \mid s', a')$.
E.2.2 Computing expected advantages
As a next step we detail how to compute the true expected advantage given a potentially imperfect estimator of the contribution coefficient ŵ, the value function V̂ or the action-value function Q̂.
# COCOA
$\mathbb{E}_\pi[\hat{A}^\pi(s, a)] = r(s, a) - \sum_{a' \in \mathcal{A}} \pi(a' \mid s) r(s, a') + \mathbb{E}_{\tau \sim T(s, \pi)}\Big[ \sum_{k=1}^{T} \hat{w}(s, a, U_k) R_k \Big]$ (83)
$= r(s, a) - \sum_{a' \in \mathcal{A}} \pi(a' \mid s) r(s, a') + \sum_{u'} \sum_{k=1}^{T} p^\pi(U_k = u' \mid S_0 = s)\, \hat{w}(s, a, u')\, r(u')$ (84)
$= r(s, a) - \sum_{a' \in \mathcal{A}} \pi(a' \mid s) r(s, a') + \sum_{u'} \hat{w}(s, a, u')\, r(u') \sum_{a' \in \mathcal{A}} \pi(a' \mid s) M(s, a', u', T)$ (85)
where $r(u') = \sum_{r'} r'\, p(r' \mid u')$
2306.16803#122
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
123
where $r(u') = \sum_{r'} r'\, p(r' \mid u')$ and M(s, a', u', T) is defined as in E.2.1.
# Advantage
$\mathbb{E}_\pi[\hat{A}^\pi(s, a)] = Q(s, a) - \hat{V}(s)$ (86)
where Q is the ground truth action-value function obtained following E.2.1.
# Q-critic
$\mathbb{E}_\pi[\hat{A}^\pi(s, a)] = \hat{Q}(s, a) - \sum_{a' \in \mathcal{A}} \pi(a' \mid s) \hat{Q}(s, a')$ (87)
E.2.3 Computing expected policy gradients
Finally, we show how we can compute the expected policy gradient $\mathbb{E}_\pi[\hat{\nabla}_\theta V^\pi]$ of a potentially biased gradient estimator, given the expected advantage $\mathbb{E}_\pi[\hat{A}^\pi(s, a)]$:
2306.16803#123
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
124
$\mathbb{E}_\pi[\hat{\nabla}_\theta V^\pi] = \sum_{k=0}^{T} \mathbb{E}_{S_k \sim T(s_0, \pi)} \Big[ \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid S_k)\, \mathbb{E}_\pi[\hat{A}^\pi(S_k, a)] \Big]$ (88)
$= \sum_{k=0}^{T} \sum_{s \in \mathcal{S}} p^\pi(S_k = s \mid s_0) \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s)\, \mathbb{E}_\pi[\hat{A}^\pi(s, a)]$ (89)
Using automatic differentiation to obtain $\nabla_\theta \pi(a \mid s_0)$, and the quantity M(s, a, s', T) we defined above, we can then compute the expected policy gradient.
$\mathbb{E}_\pi[\hat{\nabla}_\theta V^\pi] = \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s_0)\, \mathbb{E}_\pi[\hat{A}^\pi(s_0, a)]$ (90)
$+ \sum_{s \in \mathcal{S}} \sum_{a_0 \in \mathcal{A}} \pi(a_0 \mid s_0)\, M(s_0, a_0, s, T) \sum_{a \in \mathcal{A}} \nabla_\theta \pi(a \mid s)\, \mathbb{E}_\pi[\hat{A}^\pi(s, a)]$ (91)
Computing the ground-truth policy gradient. To compute the ground-truth policy gradient, we can apply the same strategy as for Eq. 90 and replace the expected (possibly biased) advantage $\mathbb{E}_\pi[\hat{A}^\pi(s_0, a)]$ by the ground-truth advantage function, computed with the ground-truth action-value function (c.f. Section E.2.1).
# E.3 Bias, variance and SNR metrics
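The expected policy gradient in Eqs. 90-91 is a weighted combination of per-state terms. A sketch for a tabular softmax policy, where the closed-form softmax Jacobian stands in for automatic differentiation and all arrays (M, the expected-advantage table) are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions = 5, 2

theta = rng.normal(size=(n_states, n_actions))           # tabular softmax policy parameters
pi = np.exp(theta) / np.exp(theta).sum(1, keepdims=True)
M = rng.random((n_states, n_actions, n_states))          # M[s0, a0, s], e.g. from Eq. 76
adv = rng.normal(size=(n_states, n_actions))             # E_pi[A_hat(s, a)] from Eqs. 83-87

def dpi_dtheta(pi_s):
    # Jacobian of the softmax: d pi(a|s) / d theta[s, b] = pi(a) (1_{a=b} - pi(b))
    return np.diag(pi_s) - np.outer(pi_s, pi_s)

s0 = 0
grad = np.zeros_like(theta)
# First term (Eq. 90): sum_a grad_theta pi(a|s0) * E[A(s0, a)]
grad[s0] += dpi_dtheta(pi[s0]).T @ adv[s0]
# Second term (Eq. 91): states reached from s0, weighted by pi(a0|s0) M(s0, a0, s)
state_weights = pi[s0] @ M[s0]            # shape (n_states,)
for s in range(n_states):
    grad[s] += state_weights[s] * (dpi_dtheta(pi[s]).T @ adv[s])
print(grad)
```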
2306.16803#124
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]
2306.16803
125
# E.3 Bias, variance and SNR metrics
To analyze the quality of the policy gradient estimators, we use the signal-to-noise ratio (SNR), which we further subdivide into variance and bias. A higher SNR indicates that we need fewer trajectories to estimate accurate policy gradients, hence reflecting better credit assignment. To obtain meaningful scales, we normalize the bias and variance by the norm of the ground-truth policy gradient.
$\text{SNR} = \frac{\|\nabla_\theta V^\pi\|^2}{\mathbb{E}_\pi\big[\|\hat{\nabla}^\cdot_\theta V^\pi - \nabla_\theta V^\pi\|^2\big]}$ (92)
$\text{Variance} = \frac{\mathbb{E}_\pi\big[\|\hat{\nabla}^\cdot_\theta V^\pi - \mathbb{E}_\pi[\hat{\nabla}^\cdot_\theta V^\pi]\|^2\big]}{\|\nabla_\theta V^\pi\|^2}$ (93)
$\text{Bias} = \frac{\|\mathbb{E}_\pi[\hat{\nabla}^\cdot_\theta V^\pi] - \nabla_\theta V^\pi\|^2}{\|\nabla_\theta V^\pi\|^2}$ (94)
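A sketch of how Eqs. 92-94 can be computed in decibels from a batch of sampled gradient estimates; here the ground-truth gradient and the samples are random placeholders, whereas the paper obtains the relevant expectations exactly via dynamic programming:

```python
import numpy as np

def gradient_metrics_db(grad_samples, grad_true):
    """grad_samples: array (n_samples, d) of estimator draws; grad_true: array (d,)."""
    norm_true = np.sum(grad_true ** 2)
    mean_est = grad_samples.mean(axis=0)
    mse = np.mean(np.sum((grad_samples - grad_true) ** 2, axis=1))
    snr = norm_true / mse                                                        # Eq. 92
    variance = np.mean(np.sum((grad_samples - mean_est) ** 2, axis=1)) / norm_true  # Eq. 93
    bias = np.sum((mean_est - grad_true) ** 2) / norm_true                       # Eq. 94
    to_db = lambda x: 10.0 * np.log10(x)
    return to_db(snr), to_db(variance), to_db(bias)

rng = np.random.default_rng(6)
grad_true = rng.normal(size=8)
grad_samples = grad_true + 0.1 + 0.5 * rng.normal(size=(512, 8))   # biased, noisy estimator
print(gradient_metrics_db(grad_samples, grad_true))
```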
We compute the full expectation $\mathbb{E}_\pi[\hat{\nabla}_\theta V^\pi]$ and the ground-truth policy gradient $\nabla_\theta V^\pi$ by leveraging our dynamic programming setup, while for the expectation of the squared differences $\mathbb{E}_\pi\big[\|\hat{\nabla}_\theta V^\pi - \nabla_\theta V^\pi\|^2\big]$ we use Monte Carlo sampling with a sample size of 512. We report the metrics in decibels in all figures.

Focusing on long-term credit assignment. As we are primarily interested in assessing the long-term credit assignment capabilities of the gradient estimators, we report the statistics of the policy gradient estimator corresponding to learning to pick up the key or not. Hence, we compare the SNR, variance and bias of a partial policy gradient estimator considering only t = 0 in the outer sum (corresponding to the state with the key) for all considered estimators (c.f. Table 1).
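To make these diagnostics concrete, the sketch below estimates the SNR, normalized variance, and normalized bias in decibels from a batch of sampled gradient estimates and a ground-truth gradient; the function names and the use of plain NumPy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gradient_metrics_db(grad_samples, true_grad):
    """Estimate SNR, normalized variance and bias (in dB) of a policy
    gradient estimator, following Eqs. 92-94.

    grad_samples: array (num_samples, num_params) of Monte Carlo samples
        of the flattened gradient estimate (e.g. 512 trajectories).
    true_grad: array (num_params,), ground-truth policy gradient.
    """
    true_norm_sq = np.sum(true_grad ** 2)
    mean_grad = grad_samples.mean(axis=0)

    # Eq. 92: signal over the expected squared error of the estimator.
    mse = np.mean(np.sum((grad_samples - true_grad) ** 2, axis=1))
    snr = true_norm_sq / mse

    # Eq. 93: variance around the estimator's own mean, normalized.
    var = np.mean(np.sum((grad_samples - mean_grad) ** 2, axis=1)) / true_norm_sq

    # Eq. 94: squared bias of the estimator's mean, normalized.
    bias = np.sum((mean_grad - true_grad) ** 2) / true_norm_sq

    to_db = lambda x: 10.0 * np.log10(x)
    return to_db(snr), to_db(var), to_db(bias)
```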
Shadow training. Policy gradients evaluated during training depend on the specific learning trajectory of the agent. Since all methods’ policy gradient estimators contain noise, these trajectories are likely different for the different methods. As a result, it is difficult to directly compare the quality of the policy gradient estimators, since it depends on the specific data generated by intermediate policies during training. In order to allow for a controlled comparison between methods independent of the noise introduced by different trajectories, we consider a shadow training setup in which the policy is trained with the Q-critic method using ground-truth action-value functions. We can then compute the policy gradients for the various estimators on the same shared data along this learning trajectory without using it to actually train the policy. We use this strategy to generate the results shown in Fig. 1B (right), Fig. 2B and Fig. 3C-D.
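A minimal sketch of this evaluation protocol is given below. The trainer and estimator interfaces are hypothetical stand-ins for the actual implementation; the point is only that all estimators are evaluated on the same shared trajectories while only the ground-truth Q-critic gradient updates the policy.

```python
def shadow_training_evaluation(env, policy, q_critic_update, estimators,
                               num_iterations, batch_size):
    """Train the policy with the ground-truth Q-critic only, while logging
    gradient diagnostics for all other estimators on the same shared data."""
    metrics_log = []
    for _ in range(num_iterations):
        # Collect a batch of trajectories with the current policy.
        batch = [env.rollout(policy) for _ in range(batch_size)]

        # Evaluate every estimator on the *same* batch (no parameter update).
        metrics_log.append({name: est.estimate_gradient(policy, batch)
                            for name, est in estimators.items()})

        # Only the Q-critic gradient with ground-truth action-values is
        # used to actually update the policy parameters.
        policy = q_critic_update(policy, batch)
    return metrics_log
```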
# E.4 Linear key-to-door environment setup

We simplify the key-to-door environment previously considered by various papers [e.g. 3, 4, 32] to a one-dimensional, linear track instead of the original two-dimensional grid world. This version still captures the difficulty of long-term credit assignment but reduces the computational burden, allowing us to thoroughly analyze different policy gradient estimators with the aforementioned dynamic-programming-based setup. The environment is depicted in Fig. 3A. Here, the agent needs to pick up a key in the first time step, after which it engages in a distractor task of picking up apples, which can either be to the left or the right of the agent and which can stochastically assume two different reward values. Finally, the agent reaches a door which it can open with the key to collect a treasure reward. In our simulations we represent states using a nine-dimensional vector, encoding the relative position on the track as a floating-point number, a boolean for each item that could be present at the current location (empty, apple left, apple right, door, key, treasure), as well as a boolean indicating whether the agent has the key and a boolean for whether the agent does not have the key.
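The sketch below illustrates one way this nine-dimensional state encoding could be constructed; the field names and ordering are our own assumptions for illustration.

```python
import numpy as np

ITEMS = ["empty", "apple_left", "apple_right", "door", "key", "treasure"]

def encode_state(position, track_length, item_here, has_key):
    """Encode a linear key-to-door state as the nine-dimensional vector
    described above: relative position, a one-hot over the six possible
    items at the current location, and two booleans for key possession."""
    encoding = np.zeros(9, dtype=np.float32)
    encoding[0] = position / track_length          # relative position
    encoding[1 + ITEMS.index(item_here)] = 1.0     # item at current location
    encoding[7] = float(has_key)                   # agent has the key
    encoding[8] = float(not has_key)               # agent does not have the key
    return encoding
```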
There are four discrete actions the agent can pick at every time step: pick up the key, pick to the left, pick to the right, and open the door. Regardless of the chosen action, the agent advances to the next position in the next time step. If it has correctly picked up the key in the first time step and opened the door in the penultimate time step, it will automatically pick up the treasure, not requiring an additional action. Hung et al. [4] showed that the signal-to-noise ratio (SNR) of the REINFORCE policy gradient [6] for solving the main task of picking up the key can be approximated by

$$\text{SNR}_{\text{REINF}} \approx \frac{\big\|\mathbb{E}_\pi\big[\hat{\nabla}^{\text{REINF}} V^\pi\big]\big\|^2}{C(\theta) \sum_{t \in T_2} \mathbb{V}[R_t] + \mathrm{Tr}\big[\mathbb{V}\big[\hat{\nabla}^{\text{REINF}} V^\pi \mid \text{no } T_2\big]\big]} \tag{95}$$
with $\hat{\nabla}^{\text{REINF}} V^\pi$ the REINFORCE estimator (c.f. Table 1), $C(\theta)$ a reward-independent constant, $T_2$ the set of time steps corresponding to the distractor task, and Tr the trace of the covariance matrix of the REINFORCE estimator in an equivalent task setup without distractor rewards. Hence, we can adjust the difficulty of the task by increasing the number of distractor rewards and their variance. We perform experiments with environments of length $L \in \{20, 40, \ldots, 100\}$, choosing the reward values such that the total distractor reward remains approximately constant. Concretely, distractor rewards are sampled as $r_{\text{distractor}} \sim \mathcal{U}(\{\tfrac{2}{L}\})$ and the treasure leads to a deterministic reward of $r_{\text{treasure}} = \tfrac{4}{L}$.

# E.5 Reward switching setup

While learning to get the treasure in the key-to-door environment requires long-term credit assignment, as the agent needs to learn to 1) pick up the key and 2) open the door, learning to stop picking up the treasure does not require long-term credit assignment, since the agent can simply learn to stop opening the door.
We therefore reuse the linear key-to-door environment of length L = 40, with the single difference that we remove the requirement to open the door in order to get the treasure reward. The agent thus needs to perform similar credit assignment to both get the treasure and stop getting the treasure. When applying reward switching, we simply flip the sign of the treasure reward while keeping the distractor reward unchanged.

# E.6 Tree environment setup

We parameterize the tree environment by its depth $d$, the number of actions $n_a$ determining the branching factor, and the state overlap $o_s$, defined as the number of overlapping children from two neighbouring nodes. The states are represented by two integers: $i < d$ representing the current level in the tree, and $j$ the position of the state within that level. The root node has state $(i, j) = (0, 0)$, and the state transitions are deterministic and given by $i \leftarrow i + 1$ and $j \leftarrow j(n_a - o_s) + a$, where we represent the action by an integer $0 \leq a < n_a$. We assign each state-action pair a reward $r \in \{-2, -1, 0, 1, 2\}$, computed as

$$r(s, a) = \big((\mathrm{idx}(s) + a\,p + \mathrm{seed}) \bmod n_r\big) - \lfloor n_r / 2 \rfloor \tag{96}$$
with the modulo operator mod, the number of reward values $n_r = 5$, $\mathrm{idx}(s)$ a unique integer index for the state $s = (i, j)$, a large prime number $p$, and an environment seed. To introduce rewarding outcome encodings with varying information content, we group state-action pairs corresponding to the same reward value in $n_g$ groups:

$$u = \big((\mathrm{idx}(s) + a\,p + \mathrm{seed}) \bmod (n_r n_g)\big) - \lfloor n_r / 2 \rfloor \tag{97}$$

Environment parameters for Figure 2. For the experiment of Fig. 2, we use 6 actions and a depth of 4. We plotted the variance of the COCOA estimator for encodings $U$ corresponding to $n_g = 4$ and $n_g = 32$.
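As a sketch, the transition and reward rules above can be written out as follows; the function and argument names are our own, and the floor division mirrors the $\lfloor n_r/2 \rfloor$ offset in Eqs. 96-97.

```python
N_REWARD_VALUES = 5  # n_r

def tree_step(state, action, n_actions, overlap):
    """Deterministic transition of the tree environment: descend one level
    and shift within the level according to the chosen action."""
    i, j = state
    return (i + 1, j * (n_actions - overlap) + action)

def tree_reward(state_index, action, prime, seed, n_groups=1):
    """Reward (Eq. 96) and grouped rewarding-outcome encoding (Eq. 97) for a
    state-action pair, where state_index is the unique integer idx(s)."""
    base = state_index + action * prime + seed
    reward = (base % N_REWARD_VALUES) - N_REWARD_VALUES // 2
    encoding = (base % (N_REWARD_VALUES * n_groups)) - N_REWARD_VALUES // 2
    return reward, encoding
```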
# E.7 Task interleaving environment setup

We simplify the task interleaving environment of Mesnard et al. [3] in the following way: the environment is parameterized by the number of contexts C, the number of objects per context O, the maximum number of open contexts B, and finally the dying probability 1 − γ. In each context, a set of 2O objects is given, and a subset of O objects is randomly chosen as the rewarding objects for that context. The task consists of learning, for all contexts, to choose the right object when presented with a pair of objects, one rewarding and one non-rewarding. While this is reminiscent of a contextual bandit setting, crucially, the agent receives the reward associated with its choice only at a later time, after potentially making new choices for other contexts and receiving rewards from other contexts, resulting in an interleaved bandit problem. The agent must thus learn to disentangle the reward associated with each context in order to perform proper credit assignment.

State and action At every timestep, there are 2 actions the agent can take: choose right, or choose left. The state is defined by the following variables:
Figure 8: Schematic of the neural network architecture of the hindsight models.

• A set of open contexts, i.e. contexts where choices have been made but no reward associated with those choices has been received
• For each open context, a bit (or key) indicating whether the correct choice has been made or not
• The context of the current task
• A pair of objects currently being presented to the agent, if the current context is a query room. Each element of the pair is assigned to either the left side or the right side.

Transition and reward The transition dynamics work as follows. If the agent is in an answer room, then regardless of the action, it receives a context-dependent reward if it also has the key of the corresponding context (i.e. it made the correct choice when it last encountered the query room corresponding to the context). Otherwise, it receives 0 reward. Then, the current context is removed from the set of open contexts.
If the agent is in a query room, then the agent is presented with 2 objects, one rewarding and one not, and must choose the side of the rewarding object. Regardless, the agent receives 0 reward at this timestep, and the current context is added to the set of open contexts. Furthermore, if it did make the correct choice, a key associated with the current context is given. Finally, the next context is sampled. With probability $\frac{C_o}{B}$ (where $C_o$ is the number of open contexts), the agent samples uniformly one of the open contexts. Otherwise, the agent samples uniformly one of the non-open contexts. The 2 objects are then also sampled uniformly out of the O rewarding objects and O unrewarding objects. The rewarding object is placed either on the right or left side with uniform probability, and the unrewarding object is placed on the other side. Crucially, there is a probability 1 − γ of dying, in which case the agent receives the reward but is put into a terminal state at the next timestep.

Environment parameters For all our experiments, we choose C = 5, B = 3, O = 2 and γ = 0.95.
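A minimal sketch of these transition rules is given below; the data structures (sets of open contexts and held keys) and function names are our own illustrative choices, not the paper's implementation.

```python
import random

def sample_next_context(open_contexts, all_contexts, max_open):
    """Sample the next context: an open context with probability
    len(open_contexts) / max_open, otherwise a closed one."""
    if open_contexts and random.random() < len(open_contexts) / max_open:
        return random.choice(sorted(open_contexts))
    closed = [c for c in all_contexts if c not in open_contexts]
    return random.choice(closed)

def query_room_step(chose_correct_side, context, open_contexts, keys):
    """Query-room transition: zero reward, the context is opened, and a key
    is granted if the rewarding object's side was chosen."""
    open_contexts.add(context)
    if chose_correct_side:
        keys.add(context)
    return 0.0  # reward at this timestep

def answer_room_step(context, open_contexts, keys, context_reward):
    """Answer-room transition: reward is paid out only if the agent holds
    the key for this context; the context is then closed."""
    reward = context_reward if context in keys else 0.0
    open_contexts.discard(context)
    keys.discard(context)
    return reward
```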
Visualization of contribution coefficient magnitudes. For the heatmaps shown in Fig. 4C, we computed each entry as $\sum_{a \in \mathcal{A}} \pi(a \mid s_t) \cdot |w(s_t, a, r_{t+k})|$, where the states $s_t$ and rewards $r_{t+k}$ are grouped by the context of their corresponding room. We average over all query-answer pairs grouped by contexts, excluding query-query or answer-answer pairs, and only consider answer rooms with non-zero rewards.

# E.8 Training details

E.8.1 Architecture

We use separate fully-connected ReLU networks to parameterize the policy, value function, and action-value function. For the policy we use two hidden layers of size 64; for the value function and the action-value function we each use a single hidden layer of size 256. For the hindsight model of both HCA and COCOA, we found a simple multilayer perceptron with the state s and rewarding outcome encoding u′ as inputs to perform poorly. We hypothesize that this is due to the policy dependence of the hindsight distribution creating a moving target during learning as the
policy changes. Leveraging Proposition 5, we therefore add the policy logits as an extra input to the hindsight network to ease tracking the changing policy. We found good performance using a simple hypernetwork, depicted in Fig. 8, that combines the policy logits with the state and hindsight object inputs through a multiplicative interaction, outputting a logit for each possible action. The multiplicative interaction, denoted by ⊗, consists of a matrix multiplication of a matrix output by the network with the policy logits, and can be interpreted as selecting a combination of policy logits to add to the output channel. In order to allow gating with both positive and negative values, we use a gated version of the ReLU nonlinearity in the second layer, which computes the difference between the split, rectified input, effectively halving the output dimension:

$$\text{ReLU-g}(x) = \text{ReLU}(x_{0:n/2}) - \text{ReLU}(x_{n/2:n}) \tag{98}$$

with n the dimension of x. Gating in combination with the multiplicative interaction is a useful inductive bias for the hindsight model, since for actions which have zero contribution towards the rewarding outcome u the hindsight distribution is equal to the policy. To increase the performance of our HCA+ baseline, we provide both the policy logits and one minus the policy logits to the multiplicative interaction.
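The gated nonlinearity and the multiplicative interaction can be sketched as follows; the layer sizes, parameter shapes, and the way the output matrix is reshaped are our own assumptions, intended only to illustrate the structure described above.

```python
import numpy as np

def relu_g(x):
    """Gated ReLU (Eq. 98): difference of the rectified halves of the input,
    halving the output dimension and allowing signed gating."""
    n = x.shape[-1]
    relu = lambda z: np.maximum(z, 0.0)
    return relu(x[..., : n // 2]) - relu(x[..., n // 2 :])

def hindsight_logits(state_and_encoding, policy_logits, params, num_actions):
    """Hypernetwork-style hindsight model: a small network on (s, u') outputs
    a matrix that multiplies the policy logits, yielding one logit per action."""
    h = relu_g(params["W1"] @ state_and_encoding + params["b1"])
    matrix = (params["W2"] @ h + params["b2"]).reshape(num_actions, -1)
    return matrix @ policy_logits  # multiplicative interaction with the policy
```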
E.8.2 Optimization

For training of all models we use the AdamW optimizer with default parameters, only adjusting the learning rates and clipping the global norm of the policy gradient. We use entropy regularization in combination with an epsilon-greedy policy to ensure sufficient exploration to discover the optimal policy. To estimate (action-)value functions we use TD(λ), treating each λ as a hyperparameter. For all linear layers we use the default initialization of Haiku [82], where biases are initialized as zero and input weights are sampled from a truncated Gaussian with standard deviation $1/\sqrt{n_{\text{input}}}$, where $n_{\text{input}}$ is the input dimension of the respective layer.
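A sketch of such an optimizer configuration using Optax is shown below; the clipping threshold and learning rate are placeholders, since the tuned values are reported in the hyperparameter tables.

```python
import optax

def make_policy_optimizer(learning_rate, max_grad_norm=1.0):
    """AdamW with default hyperparameters plus global-norm gradient clipping
    for the policy, as described above (numeric values are placeholders)."""
    return optax.chain(
        optax.clip_by_global_norm(max_grad_norm),
        optax.adamw(learning_rate=learning_rate),
    )

# Example usage with a hypothetical parameter pytree `policy_params`:
# optimizer = make_policy_optimizer(3e-4)
# opt_state = optimizer.init(policy_params)
```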
E.8.3 Reward features

Learned reward features should both be fully predictive of the reward (c.f. Theorem 1) and contain as little additional information about the underlying state-action pair as possible (c.f. Theorem 3). We can achieve the former by training a network to predict the reward given a state-action pair, and take the penultimate layer as the feature u = f (s, a). For the latter, there exist multiple approaches. When using a deterministic encoding u = f (s, a), we can bin the features such that similar features predicting the same reward are grouped together. When using a stochastic encoding p(U | S, A), we can impose an information bottleneck on the reward features U, enforcing the encoding U to discard as much information as possible [83, 84]. We choose the deterministic encoding approach, as our dynamic programming routines require a deterministic encoding.⁴ We group the deterministic rewarding outcome encodings by discretizing the reward prediction network up to the penultimate layer.
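The grouping of deterministic encodings could be implemented as a simple discretization of the penultimate-layer activations, as sketched below; the threshold-based binning mirrors the procedure described in the pretraining paragraph later in this section, and the exact threshold is an assumption.

```python
import numpy as np

def rewarding_outcome_encoding(features, threshold=0.05):
    """Discretize penultimate-layer reward features into a binary code so
    that similar features predicting the same reward fall into one group."""
    bits = (features > threshold).astype(np.int32)
    # Use the bit pattern as a hashable group identifier.
    return tuple(bits.tolist())
```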
Architecture. For the neural architecture of the reward prediction network we choose a linear model parameterized in the following way. The input is first multiplicatively transformed by an action-specific mask. The mask is parameterized by a vector of the same dimensionality as the input, which is squared in order to ensure positivity. A ReLU nonlinearity with a straight-through gradient estimator is then applied to the transformation. Finally, an action-independent readout weight transforms the activations to the prediction of the reward. The mask parameters are initialized to 1. Weights of the readout layer are initialized with a Gaussian distribution of mean 0 and std $\frac{1}{\sqrt{d}}$, where $d$ is the dimension of the input.
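A sketch of this masked linear reward predictor follows; the parameter layout and function names are our own, and the straight-through estimator is written with JAX's stop-gradient trick.

```python
import jax
import jax.numpy as jnp

def straight_through_relu(x):
    """ReLU in the forward pass, identity gradient in the backward pass."""
    return x + jax.lax.stop_gradient(jax.nn.relu(x) - x)

def predict_reward(params, state_input, action):
    """Masked linear reward model: an action-specific squared mask gates the
    input, a straight-through ReLU is applied, and an action-independent
    readout maps the activations to a scalar reward prediction."""
    mask = params["mask"][action] ** 2          # squared to ensure positivity
    hidden = straight_through_relu(mask * state_input)
    return jnp.dot(params["readout"], hidden)
```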
Loss function. We train the reward network on the mean squared error loss against the reward. To avoid spurious contributions (c.f. Section 3.3), we encourage the network to learn sparse features that discard information irrelevant to the prediction of the reward by adding an L1 regularization term to all weights up to the penultimate layer with a strength of $\eta_{L_1} = 0.001$. The readout weights are trained with standard L2 regularization with strength $\eta_{L_2} = 0.03$. For the linear key-to-door experiments, we choose $(\eta_{L_1}, \eta_{L_2}) = (0.001, 0.03)$. For the task interleaving environment, we choose $(\eta_{L_1}, \eta_{L_2}) = (0.05, 0.0003)$. All weight regularization terms are treated as weight decay, with the decay applied after the gradient update.

⁴In principle, our dynamic programming routines can be extended to allow for probabilistic rewarding outcome encodings and probabilistic environment transitions, which we leave to future work.
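A sketch of this training objective is given below, reusing the hypothetical predict_reward function from the sketch above; the regularization terms are shown in the loss for clarity, although the text above applies them as decoupled weight decay after the gradient update, and the coefficients correspond to the key-to-door setting.

```python
import jax.numpy as jnp

def reward_feature_loss(params, batch_states, batch_actions, batch_rewards):
    """Mean squared error of the reward prediction, plus L1 on the mask
    parameters and L2 on the readout weights (key-to-door coefficients)."""
    preds = jnp.stack([predict_reward(params, s, a)
                       for s, a in zip(batch_states, batch_actions)])
    mse = jnp.mean((preds - batch_rewards) ** 2)
    l1 = 0.001 * jnp.sum(jnp.abs(params["mask"]))
    l2 = 0.03 * jnp.sum(params["readout"] ** 2)
    return mse + l1 + l2
```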
Table 2: The range of values swept over for each hyperparameter in a grid search for the linear key-to-door environment and task interleaving environment. The swept hyperparameters are: lr_agent, lr_hindsight, lr_value, lr_qvalue, lr_features, td_lambda_value, td_lambda_qvalue, and entropy_reg.

Table 3: Hyperparameter values on the linear key-to-door environment. The best performing hyperparameters were identical across all environment lengths. COCOA-r stands for COCOA-return, COCOA-f for COCOA-feature. The columns correspond to the methods COCOA-r, COCOA-f, HCA+, Q-critic, Advantage, REINFORCE, TrajCV, and HCA-return, and the rows to the hyperparameters listed in Table 2.
Pretraining. To learn the reward features, we collect the first Bfeature mini-batches of episodes in a buffer using a frozen random policy. We then sample triplets (s, a, r) from the buffer and train with full-batch gradient descent using the Adam optimizer over Nfeature steps. Once trained, the reward network is frozen, and the masked inputs, discretized using the element-wise threshold function 1_{x>0.05}, are used to train the contribution coefficients as in the other COCOA methods. To ensure a fair comparison, the other methods are already allowed to train the policy on the first Bfeature batches of episodes. For the linear key-to-door experiments, we chose (Bfeature, Nfeature) = (30, 20000). For the task interleaving environment, we chose (Bfeature, Nfeature) = (90, 30000).
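As an illustration of the discretization step, the following sketch shows how a binary rewarding outcome encoding could be computed from the frozen reward network. The element-wise input mask and the function name are assumptions made for this example, not the authors' API.

```python
import numpy as np

# Hypothetical discretization of the masked inputs of the frozen reward network.
def rewarding_outcome_encoding(state, input_mask, threshold=0.05):
    masked = input_mask * state                      # keep reward-relevant input dimensions
    return (masked > threshold).astype(np.int32)     # element-wise 1_{x > 0.05}

# Example: a toy 6-dimensional observation and a sparse learned mask.
state = np.array([0.2, 0.0, 0.7, 0.01, 0.4, 0.0])
input_mask = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])
u = rewarding_outcome_encoding(state, input_mask)    # -> array([1, 0, 1, 0, 0, 0])
```

The resulting binary code plays the role of the rewarding outcome encoding U on which the contribution coefficients are trained.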
E.8.4 Hyperparameters

Linear key-to-door setup (performance). For all our experiments on the linear key-to-door environment, we chose a batch size of 8, while using a batch size of 512 to compute the average performance, SNR, bias and variance metrics. We followed a two-step procedure for selecting the hyperparameters: first, we retain the set of hyperparameters for which the environment can be solved for at least 90% of all seeds, given a large training budget. An environment is considered solved for a given seed when the probability of picking up the treasure is above 90%. Then, out of all those hyperparameters, we select the one which maximizes the cumulative amount of treasures picked over 10000 training batches. We used 30 seeds for each set of hyperparameters to identify the best performing ones, then drew 30 fresh seeds for our evaluation. The range considered for our hyperparameter search can be found in Table 2, and the selected hyperparameters in Table 3.
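The two-step selection procedure described above can be summarized by the following sketch. The array layout and the use of the mean over seeds for the cumulative-treasure criterion are assumptions made for illustration.

```python
import numpy as np

# Sketch of the two-step hyperparameter selection. Assumed data layout:
#   solved[c, s]    -> True if config c solves the environment for seed s
#                      (treasure pick-up probability above 90%)
#   treasures[c, s] -> cumulative treasures collected by config c with seed s
#                      over the 10000 training batches

def select_config(solved, treasures, min_solved_fraction=0.9):
    solved_fraction = solved.mean(axis=1)                 # fraction of seeds solved per config
    candidates = np.where(solved_fraction >= min_solved_fraction)[0]
    if candidates.size == 0:
        raise ValueError("no configuration reliably solves the environment")
    mean_treasures = treasures[candidates].mean(axis=1)   # average over seeds
    return candidates[np.argmax(mean_treasures)]          # best remaining configuration
```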
Table 4: Hyperparameter values on the task interleaving environment. For each method, the table lists the selected values of lr_agent, lr_hindsight, lr_value, lr_qvalue, lr_features, entropy_reg, td_lambda_value and td_lambda_qvalue.

Table 5: The entropy regularization value selected for each environment length of the linear key-to-door environment. The values were obtained by linearly interpolating in log-log space between the best performing entropy regularization strengths at environment lengths 20 and 100.

Environment length: 20, 40, 60, 80, 100, 100 (reward-aliasing)
entropy_reg: 0.03, 0.0187, 0.0142, 0.0116, 0.01, 0.0062
Surprisingly, we found that for all environment lengths considered, the same hyperparameters performed best, with the exception of entropy_reg. For the final values of entropy_reg for each environment length, we linearly interpolated in log-log space between the best performing values, 0.03 for length 20 and 0.01 for length 100. The resulting values can be found in Table 5.

Linear key-to-door setup (shadow training). For measuring the bias and variance of the different methods in the shadow training setting, we used the best performing hyperparameters found in the performance setting. We kept a batch size of 8 for the behavior policy and shadow training, while using a batch size of 512 during evaluation.
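The log-log interpolation of entropy_reg described above can be written out explicitly. The short sketch below reproduces the interpolated values of Table 5; the value listed for the reward-aliasing variant does not follow from these two endpoints and is chosen separately.

```python
import numpy as np

# Log-log interpolation of entropy_reg between the tuned endpoints:
# 0.03 at environment length 20 and 0.01 at length 100.

def entropy_reg(length, l0=20, v0=0.03, l1=100, v1=0.01):
    t = (np.log(length) - np.log(l0)) / (np.log(l1) - np.log(l0))
    return float(np.exp((1.0 - t) * np.log(v0) + t * np.log(v1)))

print([round(entropy_reg(L), 4) for L in (20, 40, 60, 80, 100)])
# [0.03, 0.0187, 0.0142, 0.0116, 0.01]  (matches Table 5)
```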
Reward switching setup. For the reward switching experiment, we chose hyperparameters following a similar selection procedure as in the linear key-to-door setup, but in the simplified door-less environment of length 40, without any reward switching. We found very similar hyperparameters to work well despite the absence of a door compared to the linear key-to-door setup. However, in order to ensure that noisy methods such as REINFORCE fully converged before the moment of switching the reward, we needed to train the models for 60000 training batches before the switch. To stabilize the hindsight model during this long training period, we added coarse gradient norm clipping with a threshold of 1.0. Furthermore, we found that a slightly decreased learning rate of 0.001 for the Q-critic performed best. Once the best hyperparameters were found, we applied the reward switching to record the speed of adaptation for each algorithm. We kept a batch size of 8 for the behavior policy and shadow training, while using a batch size of 512 during evaluation.

Task interleaving experiment. We chose a batch size of 8 and trained on 10000 batches, while using a batch size of 512 to compute the average performance. We used 5 seeds for each set of hyperparameters to identify the best performing ones, then drew 5 fresh seeds for our evaluation. The selected hyperparameters can be found in Table 4.
# F Additional results

We perform additional experiments to corroborate the findings of the main text. Specifically, we investigate how reward aliasing affects COCOA-reward and COCOA-feature and show that there is no significant difference in performance between HCA and our simplified version, HCA+.

F.1 HCA vs HCA+

Our policy gradient estimator for U = S presented in Eq. 4 differs slightly from Eq. 2, the version originally introduced by Harutyunyan et al. [1], as we remove the need for a learned reward model r(s, a). We empirically verify that this simplification does not lead to a decrease in performance in Fig. 9. We run the longest and hence most difficult version of the linear key-to-door environment considered in our experiments and find no significant difference in performance between our simplified version (HCA+) and the original variant (HCA).
# F.2 Learned credit assignment features allow for quick adaptation to a change of the reward function.

Disentangling rewarding outcomes comes with another benefit: when the reward value corresponding to a rewarding outcome changes, e.g. the treasure that was once considered a reward turns out to be poisonous, we only have to relearn the contribution coefficients corresponding to this rewarding outcome. This is in contrast to value-based methods, for which potentially many state values are altered. Moreover, once we have access to credit assignment features u that encode rewarding outcomes which generalize to the new setting, the contribution coefficients remain invariant to changes in reward contingencies. For example, if we remember that we need to open the door with the key to get to the treasure, we can use this knowledge to avoid picking up the key and opening the door when the treasure becomes poisonous.

Figure 9: No performance difference between the original HCA method and our modified variant HCA+. In the linear key-to-door environment of length 103, both the original HCA method with an additional learned reward model and our simplified version HCA+ perform similarly, measured as the percentage of treasure reward collected.
Figure 10: COCOA-feature quickly adapts the policy to a change of the reward function. Percentage of treasure reward collected before and after the change in reward contingencies. COCOA-feature quickly adapts to the new setting, as its credit assignment mechanisms generalize to the new setting, whereas COCOA-reward, Q-critic and Advantage need to adjust their models before appropriate credit assignment can take place.
To illustrate these benefits, we consider a variant of the linear key-to-door environment, where picking up the key always leads to obtaining the treasure reward, and where the treasure reward abruptly changes to be negative after 60k episodes. In this case, COCOA-feature learns to encode the treasure as the same underlying rewarding object before and after the switch, and hence can readily reuse the learned contribution coefficients, adapting almost instantly (cf. Fig. 10). COCOA-reward, in contrast, needs to encode the poisonous treasure as a new, previously unknown rewarding outcome. Since it only needs to relearn the contribution coefficients for the disentangled subtask of avoiding poison, it nevertheless adapts quickly (cf. Fig. 10). Advantage and Q-critic are both faced with relearning their value functions following the reward switch. In particular, they lack the ability to disentangle different rewarding outcomes, so all states that eventually lead to the treasure need to be updated.
# F.3 Using returns instead of rewards as hindsight information

In Appendix L, we discussed that HCA-return is a biased estimator in many relevant environments, including our key-to-door and task-interleaving environments. This explains its worse performance compared to our COCOA algorithms. To isolate the difference of using returns instead of rewards as hindsight information for constructing the contribution coefficients, we introduce the following new baseline:

$$\sum_{t \geq 0} \sum_{a} \nabla_\theta \pi(a \mid S_t) \left( \frac{p^\pi(A_t = a \mid S_t, Z_t)}{\pi(a \mid S_t)} - 1 \right) Z_t, \qquad (99)$$

where Z_t denotes the return obtained from time step t onward. Similar to HCA-return, this Counterfactual Return variant leverages a hindsight distribution conditioned on the return. Different from HCA-return, we use this hindsight distribution to compute contribution coefficients that can evaluate all counterfactual actions. We prove that this estimator is unbiased.

Figure 11: Performance of COCOA and baselines on the main task of picking up the treasure, measured as the average fraction of treasure rewards collected, including the Counterfactual Return method.
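For concreteness, the sketch below assembles a single-trajectory estimate of this Counterfactual Return update. The interfaces (policy_probs, grad_log_policy, hindsight_probs) and the exact form of the learned hindsight model are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

# Schematic single-trajectory estimate of the Counterfactual Return update (Eq. 99).
# Assumed (hypothetical) interfaces:
#   policy_probs(s)       -> pi(.|s), shape (num_actions,)
#   grad_log_policy(s, a) -> gradient of log pi(a|s) w.r.t. the policy parameters
#   hindsight_probs(s, z) -> learned approximation of p^pi(A_t = . | S_t = s, Z_t = z)

def counterfactual_return_grad(states, returns, policy_probs,
                               grad_log_policy, hindsight_probs):
    grad = 0.0
    for s, z in zip(states, returns):        # z is the return from time t onwards
        pi = policy_probs(s)
        h = hindsight_probs(s, z)
        coeff = h / pi - 1.0                 # contribution coefficient per action
        for a in range(len(pi)):
            # grad pi(a|s) = pi(a|s) * grad log pi(a|s)
            grad = grad + pi[a] * grad_log_policy(s, a) * coeff[a] * z
    return grad
```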
Figure 11 shows that the performance of HCA-return and Counterfactual Return lags far behind the performance of COCOA-reward on the key-to-door environment. This is due to the high variance of HCA-return and Counterfactual Return, and the bias of the former. As the return is a combination of all the rewards of a trajectory, it cannot be used to disentangle rewarding outcomes, causing the variance of the distractor subtask to spill over to the subtask of picking up the treasure.

# F.4 Investigation into the required accuracy of the hindsight models

To quantify how the approximation quality of the hindsight models and value functions affects the SNR of the resulting policy gradient estimate, we introduce the following experiment. We start from the ground-truth hindsight models and value functions computed with our dynamic programming setup (cf. Appendix E.2), and introduce a persistent bias into the output of the model by element-wise multiplying the output by (1 + σϵ), with ϵ zero-mean univariate Gaussian noise and σ ∈ {0.001, 0.003, 0.01, 0.03, 0.1, 0.3} a scalar whose magnitude we vary in the experiment.
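A minimal sketch of this perturbation, assuming the ground-truth model outputs are available as a numpy array:

```python
import numpy as np

# Sketch of the model perturbation: multiply each ground-truth output element-wise
# by (1 + sigma * eps) with eps drawn from a standard Gaussian. A fixed seed keeps
# the perturbation persistent, so it acts as a bias rather than fresh noise.

def perturb_outputs(outputs, sigma, seed=0):
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(outputs.shape)
    return outputs * (1.0 + sigma * eps)

# Example: the perturbation magnitudes swept in this experiment.
for sigma in (0.001, 0.003, 0.01, 0.03, 0.1, 0.3):
    _ = perturb_outputs(np.ones((4, 3)), sigma)
```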
Figure 12 shows the SNR of the COCOA-reward estimator and the baselines as a function of the perturbation magnitude log σ, where we average the results over 30 random seeds. We see that the sensitivity of the COCOA-reward estimator to its model quality is similar to that of Q-critic. Furthermore, the SNR of COCOA-reward remains higher than that of Advantage and HCA-state for a wide range of perturbations.

# G Contribution analysis in continuous spaces and POMDPs

In Section 3.3 we showed that HCA can suffer from spurious contributions, as state representations need to contain detailed features to allow for a capable policy. The same level of detail, however, is detrimental when assigning credit to actions for reaching a particular state, since at some resolution almost every action will lead to a slightly different outcome. Measuring the contribution towards a specific state ignores that often the same reward could be obtained in a slightly different state, hence overvaluing the importance of past actions. Many commonly used environments, such as pixel-based environments, continuous environments, and partially observable MDPs, exhibit this property to a large extent due to their fine-grained state representations. Here, we take a closer look at how spurious contributions arise in continuous environments and Partially Observable MDPs (POMDPs).
Figure 12: The improved SNR of the COCOA estimator is robust to imperfect contribution coefficients. The SNR of the COCOA-reward estimator and the baselines as a function of the perturbation magnitude log σ, where we average the results over 30 random seeds.

G.1 Spurious contributions in continuous state spaces

When using continuous state spaces, pπ(Sk = s′ | s, a) represents a probability density function (PDF) instead of a probability. A ratio of PDFs p(X = x)/p(X = x′) can be interpreted as a likelihood ratio of how likely a sample X will be close to x versus x′. Using PDFs, the contribution coefficients with U = S used by HCA result in

$$w(s, a, s') = \frac{\sum_{k \geq 1} p^\pi(S_{t+k} = s' \mid S_t = s, A_t = a)}{\sum_{k \geq 1} p^\pi(S_{t+k} = s' \mid S_t = s)} - 1 = \frac{p^\pi(A_t = a \mid S_t = s, S' = s')}{\pi(a \mid s)} - 1 \qquad (100)$$
and can be interpreted as a likelihood ratio of encountering a state 'close' to s′, starting from (s, a) versus starting from s and following the policy. The contribution coefficient will be high if pπ(S′ = s′ | s, a) has a high probability density at s′ conditioned on action a, compared to the other actions a′ ≠ a. Hence, the less pπ(Sk = s′ | s, a) and pπ(Sk = s′ | s, a′) overlap around s′, the higher the contribution coefficient for action a. The variance of the distribution pπ(Sk = s′ | s, a), and hence its room for overlap, is determined by how diffuse the environment transitions and policy are. For example, a very peaked policy and nearly deterministic environment transitions lead to a sharp distribution pπ(Sk = s′ | s, a). The randomness of the policy and environment is a poor measure for the contribution of an action in obtaining a reward. For example, consider a musician's
motor system, which learns both to (i) play the violin and (ii) play the keyboard. We assume a precise control policy and near-deterministic environment dynamics, resulting in peaked distributions pπ(Sk = s′ | s, a). For playing the violin, a slight change in finger position significantly influences the pitch, hence the reward function declines sharply around the target state with perfect pitch. For playing the keyboard, the specific finger position matters to a lesser extent, as long as the correct key is pressed, resulting in a relatively flat reward function w.r.t. the precise finger positioning. The state-based contribution coefficients of Eq. 100 result in a high contribution for the action taken in the trajectory for both tasks. For playing the violin, this could be a good approximation of what we intuitively think of as a 'high contribution', whereas for the keyboard, this overvalues the importance of the past action in many cases. From this example, it is clear that measuring contributions towards rewarding states in continuous state spaces can lead to spurious contributions, as the contributions mainly depend on how diffuse the policy and environment dynamics are, and
# G.2 Deterministic continuous reward functions can lead to excessive variance

Proposition 2 shows that HCA degrades to REINFORCE in environments where each action sequence leads to distinct states. In Section G.1, we discussed that continuous state spaces exhibit this property to a large extent due to their fine-grained state representation. If each environment state has a unique reward value, COCOA can suffer from a similar degradation to HCA. As the rewarding outcome encoding U needs to be fully predictive of the reward (cf. Theorem 1), it needs to have a distinct encoding for each unique reward, and hence for each state.

With a continuous reward function this can become a significant issue if the reward function is deterministic. Due to the fine-grained state representation in continuous state spaces, almost every action will lead to a (slightly) different state. When we have a deterministic, continuous reward function, two nearby but distinct states will often lead to nearby but distinct rewards. Hence, to a large extent, the reward contains the information of the underlying state, encoded in the infinite precision of a real value, resulting in COCOA degrading towards the high-variance HCA estimator.
The above problem does not occur when the reward function p(R | S, A) is probabilistic, even for continuous rewards. If the variance of p(R | S, A) is greater than zero, the variance of p(S, A | R) is also greater than zero under mild assumptions. This means that it is not possible to perfectly decode the underlying state from a specific reward value r, and hence COCOA does not degrade towards HCA in this case. Intuitively, different nearby states can lead to the same sampled reward, removing the spurious contributions. In the following section, we use this insight to alleviate spurious contributions, even for deterministic reward functions.

# G.3 Smoothing can alleviate excess variance by trading variance for bias

When each state has a unique reward, an unbiased COCOA estimator degrades to the high-variance HCA estimator, since U needs to be fully predictive of the reward and hence each state needs a unique rewarding outcome encoding. Here, we propose two ways forward that overcome the excess variance of the COCOA estimator in this extreme setting by trading variance for bias.
Rewarding outcome binning. One intuitive strategy to prevent each state from having a unique rewarding outcome encoding U is to group encodings corresponding to nearby rewards together, resulting in discrete bins. As the rewarding outcome encodings now contain less detail, the resulting COCOA estimator has lower variance. However, as the rewarding outcome encoding is no longer fully predictive of the reward, the COCOA estimator becomes biased. An intimately connected strategy is to change the reward function in the environment to a discretized reward function with several bins. Policy gradients in the discretized environment will not be exactly equal to the policy gradients in the original environment, but for fine discretizations we would not expect much bias. Similarly, when binning the rewarding outcome encodings, grouping only a few nearby encodings together yields low bias but also little variance reduction; increasing the number of rewarding outcomes we group together further lowers the variance at the cost of an increasing bias, creating a bias-variance trade-off.
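As an illustration, here is a minimal NumPy sketch (with hypothetical bin edges and reward ranges) of how continuous rewards could be mapped to a discrete rewarding outcome encoding; the bin width is the knob that trades bias for variance.

```python
import numpy as np

def binned_outcome_encoding(rewards, bin_width=0.1, r_min=-1.0, r_max=1.0):
    """Map continuous rewards to discrete bin indices used as rewarding outcome encoding U.

    Coarser bins (larger bin_width) group more states together, lowering the
    variance of the COCOA estimator at the cost of a larger bias, since U is
    then no longer fully predictive of the reward.
    """
    edges = np.arange(r_min, r_max + bin_width, bin_width)
    # np.digitize returns, for each reward, the index of the bin it falls into.
    return np.digitize(rewards, edges)

# Example: two nearby rewards receive the same encoding under coarse binning.
rewards = np.array([0.42, 0.44, -0.3])
print(binned_outcome_encoding(rewards, bin_width=0.1))   # coarse: first two share a bin
print(binned_outcome_encoding(rewards, bin_width=0.01))  # fine: the two nearby rewards get distinct bins
```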
Stochastic rewarding outcomes. We can generalize the above binning technique to rewarding outcome encodings that bin rewards stochastically. In Section G.2, we discussed that when the reward function is probabilistic, the excessive variance problem is less pronounced, as different states can lead to the same reward. When dealing with a deterministic reward function, we can introduce stochasticity in the rewarding outcome encoding to leverage the same principle and reduce variance at the cost of increased bias. For example, we can use the rewarding outcome encoding U ∼ N(R, σ), with N a Gaussian distribution. As this rewarding outcome encoding is not fully predictive of the reward, it introduces bias. We can control this bias-variance trade-off with the standard deviation σ: a small σ corresponds to a sharp distribution, akin to a fine discretization in the binning strategy above, and hence a low bias. Increasing σ means that more states could have produced the same rewarding outcome encoding, lowering the variance at the cost of increasing the bias.
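To make this intuition concrete, the following toy check (a made-up illustration, not an experiment from the main text) measures how well the underlying state can be recovered from the encoding by a nearest-neighbour decoder: with a deterministic injective reward the state is decoded exactly, while increasing σ makes the decoding ambiguous, which is exactly what removes the spurious contributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 1D states on a grid, deterministic injective reward r(s) = s**3.
states = np.linspace(-1.0, 1.0, 200)
rewards = states ** 3

def decoding_error(sigma, n_samples=5000):
    """Sample encodings U ~ N(r(s), sigma) and decode s by nearest reward."""
    idx = rng.integers(0, len(states), size=n_samples)
    u = rewards[idx] + sigma * rng.standard_normal(n_samples)
    # Nearest-neighbour decoding of the state from the encoding.
    decoded = states[np.abs(rewards[None, :] - u[:, None]).argmin(axis=1)]
    return np.mean(np.abs(decoded - states[idx]))

for sigma in [0.0, 0.05, 0.2]:
    print(f"sigma={sigma:.2f}  mean |s_decoded - s| = {decoding_error(sigma):.3f}")
# sigma=0 recovers the state exactly (COCOA degrades towards HCA);
# larger sigma blurs which state produced the encoding, trading bias for variance.
```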
Implicit smoothing by noise perturbations or limited-capacity networks. Interestingly, the above strategy of defining stochastic rewarding outcomes is equivalent to adjusting the training scheme of the hindsight model h(a | s, u′) by adding noise to the input u′. Here, we take U equal to the (deterministic) reward R, but add noise to it while training the hindsight model; a minimal training sketch is given below. Due to the noise, the hindsight model cannot perfectly decode the action a from its input, resulting in the same effect as explicitly using stochastic rewarding outcomes. Adding noise to the input of a neural network is a frequently used regularization technique. Hence, an interesting route to investigate is whether other regularization techniques on the hindsight model, such as limiting its capacity, can result in a bias-variance trade-off for HCA and COCOA.

Smoothing hindsight states for HCA. We can apply the same (stochastic) binning technique to hindsight states for HCA, creating a more coarse-grained state representation for backward credit assignment. However, the bias-variance trade-off is more difficult to control for states compared to rewards. The sensitivity of the reward function to the underlying state can be large in some regions of state space and small in others, so a uniform (stochastic) binning of the state space can introduce a large bias in sensitive regions while yielding little variance reduction in insensitive ones.
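As an illustration of the implicit smoothing discussed above, here is a minimal, hypothetical PyTorch sketch (module and variable names are our own assumptions) of training a hindsight classifier h(a | s, u′) on on-policy tuples, where Gaussian noise is added to the reward input during training; the noise scale plays the role of σ above.

```python
import torch
import torch.nn as nn

class HindsightModel(nn.Module):
    """Approximates the hindsight distribution h(a | s, u') with u' = reward."""

    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, u):
        return self.net(torch.cat([state, u], dim=-1))  # action logits

def train_step(model, optimizer, state, action, reward, noise_std=0.1):
    """One supervised step on (s, a, r) tuples from on-policy trajectories.

    Adding noise to the reward input implicitly smooths the rewarding outcome
    encoding: the model can no longer decode the action from the reward's
    infinite precision, trading a little bias for lower estimator variance.
    """
    u = reward.unsqueeze(-1) + noise_std * torch.randn_like(reward.unsqueeze(-1))
    logits = model(state, u)
    loss = nn.functional.cross_entropy(logits, action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```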
Proposition 8 provides further insight into what the optimal smoothing or binning for HCA looks like. Consider the case where we have a discrete reward function with not too many distinct values compared to the state space, such that COCOA-reward is a low-variance gradient estimator. Proposition 8 shows that we can recombine the state-based hindsight distribution pπ(A0 = a | S0 = s, S′ = s′) into the reward-based hindsight distribution pπ(A0 = a | S0 = s, R′ = r′) by leveraging the smoothing distribution pπ(S′ = s′ | S0 = s, R′ = r′). When we consider a specific hindsight state s′′, this means that we can obtain the low-variance COCOA-reward estimator by considering all states S′ that could have led to the same reward r(s′′), instead of only s′′. Hence, instead of using a uniform stochastic binning strategy with e.g. S′ ∼ N(s′′, σ), a more optimal binning strategy takes the reward structure into account through pπ(S′ | S0 = s, R′ = r(s′′)).
# Proposition 8.

pπ(A0 = a | S0 = s, R′ = r′) = Σ_{s′∈S} pπ(S′ = s′ | S0 = s, R′ = r′) pπ(A0 = a | S0 = s, S′ = s′)

Proof. As A0 is d-separated from R′ conditioned on S′ and S0 (c.f. Fig. 6b), we can use the implied conditional independence to prove the proposition:

pπ(A0 = a | S0 = s, R′ = r′)   (102)
= Σ_{s′∈S} pπ(S′ = s′, A0 = a | S0 = s, R′ = r′)   (103)
= Σ_{s′∈S} pπ(S′ = s′ | S0 = s, R′ = r′) pπ(A0 = a | S0 = s, S′ = s′, R′ = r′)   (104)
= Σ_{s′∈S} pπ(S′ = s′ | S0 = s, R′ = r′) pπ(A0 = a | S0 = s, S′ = s′)   (105)
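As a quick sanity check (our own, with made-up numbers), the identity in Proposition 8 can be verified by brute-force enumeration on a toy joint distribution over (A0, S′) given S0, with the reward defined as a deterministic function of S′ so that the required conditional independence holds:

```python
import numpy as np

rng = np.random.default_rng(1)

n_actions, n_states = 3, 5
# Toy joint p(A0 = a, S' = s' | S0 = s) for a fixed start state s.
joint_as = rng.random((n_actions, n_states))
joint_as /= joint_as.sum()
# Reward is a deterministic function of S' (so A0 is independent of R' given S', S0).
reward_of_state = np.array([0.0, 1.0, 0.0, 2.0, 1.0])

for r in np.unique(reward_of_state):
    mask = reward_of_state == r                       # states consistent with R' = r
    p_r = joint_as[:, mask].sum()                     # p(R' = r | S0 = s)
    # Left-hand side: p(A0 = a | S0 = s, R' = r)
    lhs = joint_as[:, mask].sum(axis=1) / p_r
    # Right-hand side: sum_{s'} p(S' = s' | S0 = s, R' = r) * p(A0 = a | S0 = s, S' = s')
    p_sprime_given_r = joint_as[:, mask].sum(axis=0) / p_r
    p_a_given_sprime = joint_as[:, mask] / joint_as[:, mask].sum(axis=0, keepdims=True)
    rhs = (p_a_given_sprime * p_sprime_given_r).sum(axis=1)
    assert np.allclose(lhs, rhs)
print("Proposition 8 holds on the toy distribution.")
```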
# G.4 Continuous action spaces

In the previous section, we discussed how continuous state spaces can lead to spurious contributions, resulting in high variance for the HCA estimator. Here, we briefly discuss how COCOA can be applied to continuous action spaces, and how Proposition 2 translates to this setting.

Gradient estimator. We can adjust the COCOA gradient estimator of Eq. 4 to continuous action spaces by replacing the sum over a′ ∈ A with an integral over the action space A:

∇θV^π(s) = Σ_{t≥0} [ ∇θ log π(At | St) Rt + ∫_A da ∇θπ(a | St) Σ_{k≥1} w(St, a, Ut+k) Rt+k ]   (106)

In general, computing this integral is intractable. We can approximate the integral by standard numerical integration methods, introducing a bias due to the approximation. Alternatively, we introduced in App. C.3 another variant of the COCOA gradient estimator that samples independent actions A′ from the policy instead of summing over the whole action space. This variant can readily be applied to continuous action spaces, resulting in

∇θV^π = Σ_{t≥0} [ ∇θ log π(At | St) Rt + (1/M) Σ_{m=1}^{M} ∇θ log π(a^m | St) Σ_{k≥1} w(St, a^m, Ut+k) Rt+k ]   (107)
where we sample M actions independently from a^m ∼ π(· | St). This gradient estimator is unbiased, but introduces extra variance through the sampling of actions.

Spurious contributions. Akin to discrete action spaces, HCA in continuous action spaces can suffer from spurious contributions when distinct action sequences lead to unique states. In this case, previous actions can be perfectly decoded from the hindsight state, and the probability density function pπ(A0 = a | S0 = s, S′ = s′) equals the Dirac delta function δ(a = a0), with a0 the action taken in the trajectory that led to s′. Substituting this Dirac delta function into the policy gradient estimator of Eq. 106 recovers the high-variance REINFORCE estimator. When we use the COCOA estimator of Eq. 107 with action sampling, HCA has an even higher variance than REINFORCE.
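The sampled-action estimator of Eq. 107 is straightforward to implement with automatic differentiation. Below is a minimal, hypothetical PyTorch sketch for a generic stochastic policy (the contribution model w_model, the shapes, and all names are our own assumptions): it builds a surrogate loss for a single trajectory whose gradient matches the estimator, with contribution coefficients and rewards detached so that gradients only flow through the log-probabilities.

```python
import torch

def cocoa_surrogate_loss(policy, w_model, states, actions, rewards, encodings,
                         n_action_samples=4):
    """Surrogate loss for the sampled-action COCOA estimator (cf. Eq. 107).

    states: (T, state_dim), actions: (T, action_dim), rewards: (T,),
    encodings: (T, enc_dim) rewarding outcome encodings U_t.
    policy(state) returns a torch.distributions.Distribution over actions;
    w_model(s, a, u) returns the contribution coefficient w(s, a, u) as a tensor.
    Minimizing this loss ascends the estimated policy gradient.
    """
    T = states.shape[0]
    loss = 0.0
    for t in range(T):
        dist = policy(states[t])
        # REINFORCE-like term for the immediate reward.
        loss = loss - dist.log_prob(actions[t]).sum() * rewards[t]
        # Counterfactual term: independently sampled actions, weighted by
        # (detached) contribution coefficients towards future rewarding outcomes.
        for _ in range(n_action_samples):
            a = dist.sample()  # sample() does not propagate gradients
            future = 0.0
            for k in range(t + 1, T):
                future = future + w_model(states[t], a, encodings[k]).detach() * rewards[k]
            loss = loss - dist.log_prob(a).sum() / n_action_samples * future
    return loss
```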
# G.5 Partially Observable MDPs

In many environments, agents do not have access to the complete state information s, but instead receive observations with incomplete information. Partially observable Markov decision processes (POMDPs) formalize this case by augmenting MDPs with an observation space O. Instead of directly observing Markov states, the agent acquires an observation ot at each time step. The probability of observing o in state s after performing action a is given by pO(o | s, a). A popular strategy in deep RL methods to handle partial observability is to learn an internal state xt that summarizes the observation history ht = {ot′+1, at′, rt′}_{t′=0}^{t−1}, typically by leveraging recurrent neural networks [85–88]. This internal state is then used as input to a policy or value function. This strategy is intimately connected to estimating belief states [89]. The observation history ht provides information on what the current underlying Markov state st is. We can formalize this by introducing the belief state bt, which captures the sufficient statistics for the probability distribution over the Markov states s conditioned on the observation history ht. Theoretically, such belief states can be computed by Bayesian probability calculus using the environment transition model, reward model and observation model:
pB(s | bt) = p(St = s | Ht = ht)   (108)

Seminal work has shown that a POMDP can be converted to an MDP by using the belief states b as new Markov states instead of the original Markov states s [90]. Conventional optimal control techniques can then be used to solve the belief-state MDP, motivating the use of standard RL methods designed for MDPs, combined with learning an internal state x.

HCA suffers from spurious contributions in POMDPs. As the internal state x summarizes the complete observation history h, past actions can be accurately decoded from h, causing HCA to degrade to the high-variance REINFORCE estimator (c.f. Proposition 2). Here, the tension between forming good state representations for capable policies and good representations for backward credit assignment shows up clearly. To enable optimal decision-making, a good internal state x needs to encode which underlying Markov states the agent most likely occupies, as well as the corresponding uncertainty. To this end, the internal state needs to incorporate information about the full history. However, when the same internal state is used for backward credit assignment, this leads to spurious contributions, as previous actions are encoded directly into the internal state.
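To make the Bayesian belief computation of Eq. 108 concrete, here is a small NumPy sketch of the recursive update that maps bt and a new (action, observation) pair to bt+1. It is a generic discrete belief filter with hypothetical transition and observation arrays, using the common convention that the observation is emitted by the successor state.

```python
import numpy as np

def belief_update(belief, action, observation, P, O):
    """One step of discrete Bayesian filtering for a POMDP.

    belief: (n_states,) current belief b_t over Markov states
    P: (n_states, n_actions, n_states) transition probabilities P[s, a, s']
    O: (n_states, n_actions, n_obs) observation probabilities p_O(o | s', a)
    Returns the updated belief b_{t+1} over next states s'.
    """
    predicted = belief @ P[:, action, :]             # sum_s b_t(s) P(s' | s, a)
    updated = predicted * O[:, action, observation]  # weight by observation likelihood
    return updated / updated.sum()                   # normalize

# Tiny example with 2 states, 1 action, 2 observations (made-up numbers).
P = np.array([[[0.9, 0.1]], [[0.2, 0.8]]])           # P[s, a, s']
O = np.array([[[0.7, 0.3]], [[0.1, 0.9]]])           # O[s', a, o]
b = np.array([0.5, 0.5])
print(belief_update(b, action=0, observation=1))
```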
p(S = x | b) = b1,  p(S = y | b) = b2,  p(S = z | b) = 1 − b1 − b2

We assume that bt is deterministically computed from the history ht, e.g. by an RNN. Now consider that at time step k our belief state is bk = {0.5, 0.25} and we get a reward that resulted from the Markov state x. As the belief states are deterministically computed, the distribution pπ(B′ = b′ | b, a) is a Dirac delta distribution. Suppose further that the action a does not influence the belief over the rewarding state x, but only changes the belief over the non-rewarding states (e.g. Bk = {0.5, 0.23} instead of Bk = {0.5, 0.25} when taking action a′ instead of a). As the Dirac delta distributions pπ(B′ = b′ | b, a) for different a do not overlap, we get a high contribution coefficient for the action a0 that was taken in the actual trajectory leading to the belief state b′, and a low contribution coefficient for all other actions, even though the actions did not influence the distribution over the rewarding state.
The spurious contributions originate from measuring contributions towards reaching a certain internal belief state, while ignoring that the same reward could be obtained in different belief states as well. Adapting Proposition 8 to these internal belief states provides further insight into the difference between HCA and COCOA-reward:

pπ(A0 = a | X0 = x, R′ = r′) = Σ_{x′∈X} pπ(X′ = x′ | X0 = x, R′ = r′) pπ(A0 = a | X0 = x, X′ = x′)   (110)

Here, we see that the contribution coefficients of COCOA-reward take into account that the same reward can be obtained in different internal belief states x′ by averaging over them, whereas HCA only considers a single internal state.
# H Learning contribution coefficients from non-rewarding observations

# H.1 Latent learning

Building upon HCA [1], we learn the contribution coefficients (3) by approximating the hindsight distribution pπ(a | s, u′). The quality of this model is crucial for obtaining low-bias gradient estimates. However, its training data is scarce, as it is restricted to learning from on-policy data, and from rewarding observations when the reward or rewarding object is used as encoding U. We can make the model that approximates the hindsight distribution less dependent on the policy by providing the policy logits as input (c.f. Proposition 5). Enabling COCOA-reward or COCOA-feature to learn from non-rewarding states is a more fundamental issue, as in general the rewarding outcome encodings corresponding to zero rewards do not share any features with those corresponding to non-zero rewards.

Empowering COCOA with latent learning is an interesting direction to make the learning of contribution coefficients more sample efficient. We refer to latent learning as learning useful representations of the environment structure without requiring task information, which can then be leveraged for learning new tasks more quickly [91]. In our setting, this implies learning useful representations in the hindsight model without requiring rewarding observations, such that when new rewards are encountered, we can quickly learn the corresponding contribution coefficients by leveraging the existing representations.
# H.2 Optimal rewarding outcome encodings for credit assignment

Theorem 3 shows that the less information a rewarding outcome encoding U contains, the lower the variance of the corresponding COCOA gradient estimator (4). Latent learning, on the other hand, concerns the sample efficiency and corresponding bias of the learned contribution coefficients: when the hindsight model can learn useful representations from encodings U corresponding to zero rewards, it can leverage those representations to quickly learn the contribution coefficients for rewarding outcome encodings with non-zero rewards, requiring less training data to achieve a low bias.

These two requirements on the rewarding outcome encoding U are often in conflict. To obtain a low-variance gradient estimator, U should retain as little information as possible while still being predictive of the reward. To enable latent learning for sample-efficient learning of the contribution coefficients, the hindsight model needs to pick up on recurring structure in the environment, which requires keeping as much information as possible in U to uncover the structural regularities. Using the state as rewarding outcome encoding is beneficial for latent learning, as it contains rich information on the environment structure, but results in spurious contributions and hence high variance. Using the reward or rewarding object as rewarding outcome encoding removes spurious contributions, resulting in low variance, but renders latent learning difficult.
A way out of these conflicting pressures is to use the reward or rewarding object as the encoding for low variance, but to extract the corresponding contribution coefficients from models that allow for sample-efficient, latent learning. One possible strategy is to learn probabilistic world models [20, 21, 88], which can be done using both non-rewarding and rewarding observations, and use those to approximate the contribution coefficients of Eq. 3. Another strategy, which we explore in more depth, is to learn hindsight models based on the state as rewarding outcome encoding to enable latent learning, and then recombine those learned hindsight models to obtain contribution coefficients using the reward or rewarding object as U.

# H.3 Counterfactual reasoning on rewarding states

HCA results in spurious contributions because it computes contributions towards reaching a precise rewarding state, while ignoring that the same reward could be obtained in other (nearby) states.

Figure 13: Schematic of the graphical model used in our variational information bottleneck approach.
Proposition 9 (a generalization of Proposition 8) shows that we can reduce the spurious contributions of the state-based contribution coefficients by leveraging counterfactual reasoning on rewarding states. Here, we obtain contribution coefficients for a certain rewarding outcome encoding (e.g. the rewarding object) by considering which other states s′ could lead to the same rewarding outcome, and averaging over the corresponding coefficients w(s, a, s′).

Proposition 9. Assuming S′ is fully predictive of U′, we have that

w(s, a, u′) = Σ_{s′∈S} pπ(S′ = s′ | S0 = s, U′ = u′) w(s, a, s′)   (111)
The proof follows the same technique as that of Proposition 8. Proposition 9 shows that it is possible to learn state-based contribution coefficients, enabling latent learning, and to obtain a low-variance COCOA estimator by recombining the state-based contribution coefficients into coefficients with fewer spurious contributions, provided we have access to the generative model pπ(S′ = s′ | S0 = s, U′ = u′). This model embodies the counterfactual reasoning on rewarding states: 'which other states s′ are likely, given that I am currently in u′ and visited state s somewhere in the past?'. In general, learning this generative model is as difficult as, or more difficult than, approximating the hindsight distribution pπ(a | s, u′), as it requires rewarding observations as training data. Hence, it is not possible to directly use Proposition 9 to combine latent learning with low-variance estimators. In the following section, we propose a possible way forward to circumvent this issue.
H.4 Learning credit assignment representations with an information bottleneck

Here, we outline a strategy where we learn a latent representation Z that retains useful information on the underlying states S, and crucially has a latent space structure such that pπ(Z′ = z′ | S = s, U′ = u′) is easy to approximate. Then, leveraging Proposition 9 (and replacing S′ by Z′) allows us to learn a hindsight representation based on Z, enabling latent learning, while reducing the spurious contributions by counterfactual reasoning with pπ(z′ | s, u′). To achieve this, we use the Variational Information Bottleneck approach [83, 92], closely related to the β Variational Autoencoder [93]. Fig. 13 shows the graphical model with the relations between the various variables and encodings: we learn a probabilistic encoding p(Z | S, A; θ) parameterized by θ, and we assume that the latent variable Z is fully predictive of the rewarding outcome encoding U and reward R. We aim to maximize the mutual information I(Z′; S′, A′ | S, U′) under some information bottleneck. We condition on S and U′ so that we later end up with a decoder and a variational marginal model that we can combine with Proposition 9.
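For intuition on the objective quantity, the conditional mutual information I(Z′; S′, A′ | S, U′) can be computed exactly in a small discrete toy example (the joint distribution below is random and purely illustrative; it is not derived from any environment or model in the paper), with Z′ playing the role of X, (S′, A′) of Y, and (S, U′) of the conditioning variable C.

```python
import numpy as np

def conditional_mutual_information(p_cxy):
    """I(X; Y | C) in nats for a discrete joint distribution p_cxy[c, x, y]."""
    p_c = p_cxy.sum(axis=(1, 2), keepdims=True)
    p_xy_c = p_cxy / p_c                        # p(x, y | c)
    p_x_c = p_xy_c.sum(axis=2, keepdims=True)   # p(x | c)
    p_y_c = p_xy_c.sum(axis=1, keepdims=True)   # p(y | c)
    return float((p_cxy * np.log(p_xy_c / (p_x_c * p_y_c))).sum())

rng = np.random.default_rng(0)
p = rng.random((3, 4, 5))  # c = (s, u'), x = z', y = (s', a'), flattened to indices
p /= p.sum()
print(conditional_mutual_information(p))  # non-negative
```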
Following rate-distortion theory, Alemi et al. [92] consider the following tractable variational bounds on the mutual information

H − D ≤ I(Z′; S′, A′ | S, U′) ≤ Rate    (112)

with entropy H, distortion D and rate defined as follows:

H = −E_{S′,A′,S,U′}[ log pπ(S′, A′ | S, U′) ]    (113)

D = −E_{S,U′} E_{S′,A′|S,U′} [ ∫ dz′ p(z′ | s′, a′; θ) log q(s′, a′ | s, z′; ψ) ]    (114)

Rate = E_{S,U′} [ E_{S′,A′|S,U′} [ D_KL( p(z′ | s′, a′; θ) ∥ q(z′ | s, u′; ϕ) ) ] ]    (115)

where the decoder q(s′, a′ | s, z′; ψ) is a variational approximation to p(s′, a′ | s, z′), and the marginal q(z′ | s, u′; ϕ) is a variational approximation to the true marginal p(z′ | s, u′). The distortion D quantifies how well we can decode the state-action pair (s′, a′) from the encoding
z′, by using a decoder q(s′, a′ | s, z′; ψ) parameterized by ψ. The distortion is reminiscent of an autoencoder loss, and hence encourages the encoding p(z′ | s′, a′) to retain as much information as possible on the state-action pair (s′, a′). The rate measures the average KL-divergence between the encoder and the variational marginal. In information theory, this rate measures the extra number of bits (or nats) required to encode samples from Z′ with an optimal code designed for the variational marginal q(z′ | s, u′; ϕ). We can use the rate to impose an information bottleneck on Z′. If we constrain the rate to Rate ≤ a for some positive constant a, we restrict the amount of information Z′ can encode about (S′, A′), as p(z′ | s′, a′; θ) needs to remain close to the marginal q(z′ | s, u′; ϕ), quantified by the KL-divergence. We can maximize the mutual information I(Z′; S′, A′ | S, U′) under the information bottleneck by minimizing the following objective:
min_{θ,ϕ,ψ} D + β Rate    (116)

Here, the β parameter determines the strength of the information bottleneck. This formulation is equivalent to the β Variational Autoencoder [93], and for β = 1 we recover the Variational Autoencoder [94]. To understand why this information bottleneck approach is useful to learn credit assignment representations Z, we examine the rate in more detail. We can rewrite the rate as

Rate = E_{S,U′}[ D_KL( p(z′ | s, u′) ∥ q(z′ | s, u′; ϕ) ) ]    (117)
     + E_{S,U′,Z′}[ D_KL( p(s′, a′ | s, z′) ∥ p(s′, a′ | s, u′) ) ]    (118)
Hence, optimizing the information bottleneck objective (116) w.r.t. ϕ fits the variational marginal q(z′ | s, u′; ϕ) to the true marginal p(z′ | s, u′) induced by the encoder p(z′ | s′, a′; θ). Proposition 9 uses this true marginal to recombine coefficients based on Z into coefficients based on U. Hence, by optimizing the information bottleneck objective, we learn a model q(z′ | s, u′; ϕ) that approximates the true marginal, which we then can use to obtain contribution coefficients with less spurious contributions by leveraging Proposition 9. Furthermore, minimizing the rate w.r.t. θ shapes the latent space of Z′ such that the true marginal p(z′ | s, u′) moves closer towards the variational marginal q(z′ | s, u′; ϕ).
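To make the objective in Eq. 116 concrete, the sketch below implements a distortion-plus-β-rate loss with a diagonal-Gaussian encoder and variational marginal (a minimal PyTorch-style sketch; the linear network heads, the unit-variance Gaussian decoder, and all dimensions are illustrative assumptions, not the paper's implementation).

```python
import torch
import torch.nn as nn

class CreditVIB(nn.Module):
    """Minimal sketch of the information bottleneck objective D + beta * Rate (Eq. 116).
    Encoder p(z'|s',a'; theta), decoder q(s',a'|s,z'; psi) and variational marginal
    q(z'|s,u'; phi) are all simple diagonal-Gaussian / linear heads (an assumption)."""

    def __init__(self, dim_s, dim_a, dim_u, dim_z, beta=1.0):
        super().__init__()
        self.beta = beta
        self.encoder = nn.Linear(dim_s + dim_a, 2 * dim_z)      # -> (mu, log_var) of p(z'|s',a')
        self.marginal = nn.Linear(dim_s + dim_u, 2 * dim_z)     # -> (mu, log_var) of q(z'|s,u')
        self.decoder = nn.Linear(dim_s + dim_z, dim_s + dim_a)  # reconstructs (s', a') features

    def loss(self, s, u_next, s_next, a_next):
        mu_e, logvar_e = self.encoder(torch.cat([s_next, a_next], -1)).chunk(2, -1)
        mu_m, logvar_m = self.marginal(torch.cat([s, u_next], -1)).chunk(2, -1)

        # Reparameterized sample z' ~ p(z'|s',a'; theta)
        z = mu_e + torch.randn_like(mu_e) * (0.5 * logvar_e).exp()

        # Distortion: reconstruction error of (s', a') under the decoder
        # (squared error corresponds to a unit-variance Gaussian decoder -- an assumption).
        recon = self.decoder(torch.cat([s, z], -1))
        distortion = ((recon - torch.cat([s_next, a_next], -1)) ** 2).sum(-1).mean()

        # Rate: KL( p(z'|s',a') || q(z'|s,u') ) between two diagonal Gaussians
        rate = 0.5 * (
            logvar_m - logvar_e
            + (logvar_e.exp() + (mu_e - mu_m) ** 2) / logvar_m.exp()
            - 1.0
        ).sum(-1).mean()

        return distortion + self.beta * rate

# Toy usage with random feature batches (illustrative only):
model = CreditVIB(dim_s=6, dim_a=3, dim_u=2, dim_z=4, beta=2.0)
s, u_next, s_next, a_next = [torch.randn(8, d) for d in (6, 2, 6, 3)]
print(model.loss(s, u_next, s_next, a_next))
```

After training, the marginal head plays the role of q(z′ | s, u′; ϕ) that Proposition 9 (with S′ replaced by Z′) would use to recombine latent-based coefficients.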
In summary, the outlined information bottleneck approach for learning a credit assignment representation Z is a promising way forward to merge the powers of latent learning with a low-variance COCOA gradient estimator. The distortion and rate terms in the information bottleneck objective of Eq. 116 represent a trade-off parameterized by β. Minimizing the distortion results in a detailed encoding Z′ with high mutual information with the state-action pair (S′, A′), which can be leveraged for latent learning. Minimizing the rate shapes the latent space of Z′ in such a way that the true marginal p(z′ | s, u′) can be accurately approximated within the variational family of q(z′ | s, u′; ϕ), and fits the parameters ϕ, resulting in an accurate marginal model. We can then leverage the variational marginal q(z′ | s, u′; ϕ) to perform counterfactual reasoning on the rewarding state encodings Z′ according to Proposition 9, resulting in a low-variance COCOA estimator.
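In practice, the recombination of Proposition 9 with the learned variational marginal could be approximated by Monte-Carlo sampling over latents; the hypothetical sketch below (the function interfaces, stand-in models, and sample count are assumptions for illustration only) estimates w(s, a, u′) by averaging latent-based coefficients over samples z′ ~ q(z′ | s, u′; ϕ).

```python
import numpy as np

def estimate_outcome_coefficient(w_latent, sample_marginal, s, a, u_next, n_samples=32):
    """Monte-Carlo version of Eq. 111 with latents Z' in place of S':
    w(s, a, u') ~= mean_k w(s, a, z'_k),  z'_k ~ q(z' | s, u'; phi).

    w_latent(s, a, z) -> float : latent-based contribution coefficient (assumed learned)
    sample_marginal(s, u) -> z : sample from the variational marginal (assumed learned)
    """
    samples = [w_latent(s, a, sample_marginal(s, u_next)) for _ in range(n_samples)]
    return float(np.mean(samples))

# Toy usage with stand-in models (illustrative only):
rng = np.random.default_rng(0)
w_hat = estimate_outcome_coefficient(
    w_latent=lambda s, a, z: float(np.tanh(z.sum())),  # stand-in coefficient model
    sample_marginal=lambda s, u: rng.normal(size=4),   # stand-in q(z' | s, u'; phi)
    s=0, a=1, u_next=2,
)
print(w_hat)
```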
# I Contribution analysis and causality

I.1 Causal interpretation of COCOA

COCOA is closely connected to causality theory [42], where the contribution coefficients (3) correspond to performing Do-interventions on the causal graph to estimate their effect on future rewards. To formalize this connection with causality, we need to use a new set of tools beyond conditional probabilities, as causation is in general not the same as correlation. We start with representing the MDP combined with the policy as a directed acyclic graphical model (c.f. Fig. 6a in App. C.1). In causal reasoning, we have two different 'modes of operation'. On the one hand, we can use observational data, corresponding to 'observing' the states of the nodes in the graphical model, which is compatible with conditional probabilities measuring correlations. On the other hand, we can perform interventions on the graphical model, where we manually set a node, e.g. At, to a specific value a independently of its parents St, and see what the influence is on the probability distributions of other nodes in the graph, e.g. Rt+1. These interventions are formalized with do-calculus [42],
which denotes the intervention of setting a node Vi to the value v as Do(Vi = v) and can be used to investigate causal relations.

Figure 14: Structural causal model (SCM) of the MDP. Squares represent deterministic functions and circles random variables.

Using the graphical model of Fig. 6b that abstracts time, we can use do-interventions to quantify the causal contribution of an action At = a taken in state St = s upon reaching the rewarding outcome U′ = u′ in the future as

wDo(s, a, u′) = pπ(U′ = u′ | St = s, Do(At = a)) / ( Σ_{ā∈A} π(ā | s) pπ(U′ = u′ | St = s, Do(At = ā)) ) − 1.    (119)

As conditioning on St satisfies the backdoor criterion [42] for U′ w.r.t. At, the interventional distribution pπ(U′ = u′ | St = s, Do(At = a)) is equal to the observational distribution pπ(U′ = u′ | St = s, At = a). Hence, the causal contribution coefficients of Eq. 119 are equal to the contribution coefficients of Eq. 3 used by COCOA.
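Because these do-quantities reduce to observational conditionals, the coefficients of Eq. 119 can be computed directly from a model of pπ(u′ | s, a) and the policy; the sketch below does this for a hypothetical tabular case (array names, shapes, and values are illustrative assumptions, not from the paper's experiments).

```python
import numpy as np

def contribution_coefficients(p_u_given_sa, pi):
    """w(s, a, u') = p^pi(u'|s,a) / sum_a' pi(a'|s) p^pi(u'|s,a') - 1  (Eq. 119 / Eq. 3).

    p_u_given_sa[s, a, u] : probability of reaching rewarding outcome u' from (s, a)
    pi[s, a]              : policy probabilities
    """
    # Marginal probability of the rewarding outcome under the policy: p^pi(u'|s)
    p_u_given_s = np.einsum('sa,sau->su', pi, p_u_given_sa)
    return p_u_given_sa / p_u_given_s[:, None, :] - 1.0

# Toy example with 2 states, 2 actions, 2 rewarding outcomes
pi = np.array([[0.5, 0.5], [0.9, 0.1]])
p_u_given_sa = np.array([[[0.8, 0.2], [0.2, 0.8]],
                         [[0.6, 0.4], [0.5, 0.5]]])
print(contribution_coefficients(p_u_given_sa, pi))
```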
I.2 Extending COCOA to counterfactual interventions

Within causality theory, counterfactual reasoning goes one step further than causal reasoning, by incorporating the hindsight knowledge of the external state of the world in its reasoning. Applied to COCOA, this more advanced counterfactual reasoning would evaluate the query: 'How does taking action a influence the probability of reaching a rewarding outcome, compared to taking alternative actions a′, given that everything else remains the same?'. To formalize the difference between causal and counterfactual reasoning, we need to convert the causal DAG of Figure 6a into a structural causal model (SCM), as shown in Figure 14. The SCM expresses all conditional distributions as deterministic functions with independent noise variables N, akin to the reparameterization trick [94]. In causal reasoning, we perform do-interventions on nodes, which is equivalent to cutting all incoming edges to a node. To compute the resulting probabilities, we still use the prior distribution over the noise variables N. Counterfactual reasoning goes one step further. First, it infers the posterior probability pπ({N} | T = τ), with {N} the set of all noise variables, given the observed trajectory τ. Then it performs a Do-intervention as in causal reasoning. Now, however, to compute the resulting probabilities on the nodes, it uses the posterior noise distribution combined with the modified graph.
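To illustrate this abduction-then-intervention procedure on a toy SCM (a hypothetical one-step, bandit-like example, not an environment or model from the paper), the sketch below infers the posterior over the exogenous noise from an observed transition and then replays the same noise under an alternative action.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCM: success S' = 1 iff the exogenous noise N < p_success[action].
p_success = {'left': 0.3, 'right': 0.8}

def scm_step(action, noise):
    return int(noise < p_success[action])

# Observed trajectory: the agent took 'right' and succeeded (S' = 1).
observed_action, observed_outcome = 'right', 1

# Abduction: sample the noise posterior given the observation
# (rejection sampling of N ~ U(0, 1) consistent with the observed outcome).
posterior_noise = []
while len(posterior_noise) < 10_000:
    n = rng.uniform()
    if scm_step(observed_action, n) == observed_outcome:
        posterior_noise.append(n)

# Intervention + prediction: replay the *same* noise under the alternative action.
counterfactual_success = np.mean([scm_step('left', n) for n in posterior_noise])
print(f"P(success | do(left), observation) ~= {counterfactual_success:.3f}")
# Analytically: P(N < 0.3 | N < 0.8) = 0.375, whereas the purely
# interventional P(success | do(left)) = 0.3.
```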
One possible strategy to estimate contributions using the above counterfactual reasoning is to explicitly estimate the posterior noise distribution and combine it with forward dynamics models to obtain the counterfactual probability of reaching specific rewarding outcomes. Leveraging the work of Buesing et al. [43] is a promising starting point for this direction of future work. Alternatively, we can avoid explicitly modeling the posterior noise distribution, by leveraging the hindsight distribution combined with the work of Mesnard et al. [3]. Here, we aim to learn a parameterized approximation h to the counterfactual hindsight distribution pπτ(At = a | St = s, U′ = u′), where the τ-subscript indicates the counterfactual distribution incorporating the noise posterior. Building upon the approach of Mesnard et al. [3], we can learn h(a | s′, s, Φt(τ)) to approximate the counterfactual hindsight distribution, with Φt(τ) a summary statistic trained
to encapsulate the information of the posterior noise distribution pπ({N} | T = τ). Mesnard et al. [3] show that such a summary statistic Φt(τ) can be used to amortize the posterior noise estimation if it satisfies the following two conditions: (i) it needs to provide useful information for predicting the counterfactual hindsight distribution and (ii) it needs to be independent from the action At. We can achieve both characteristics by (i) training h(a | s′, s, Φt(τ)) on the hindsight action classification task and backpropagating the gradients to Φt(τ), and (ii) training Φt on an independence maximization loss LIM(s), which is minimized iff
At and Φt are conditionally independent given St. An example is to minimize the KL divergence between π(at | st) and p(at | st, Φt(τ)), where the latter can be approximated by training a classifier q(at | st, Φt(τ)) (a minimal sketch of such a loss is given after Table 6 below). Leveraging this approach to extend COCOA towards counterfactual interventions is an exciting direction for future research.

# Table 6: Comparison of discounted policy gradient estimators (estimates of ∇θ V^π_γ(s0))

REINFORCE: Σ_{t≥0} γ^t ∇θ log π(At | St) Σ_{k≥0} γ^k R_{t+k}
Advantage: Σ_{t≥0} γ^t ∇θ log π(At | St) ( Σ_{k≥0} γ^k R_{t+k} − V^π_γ(St) )
Q-critic: Σ_{t≥0} γ^t Σ_{a∈A} ∇θ π(a | St) Q^π_γ(St, a)
COCOA: Σ_{t≥0} γ^t [ ∇θ log π(At | St) Rt + Σ_{a∈A} ∇θ π(a | St) Σ_{k≥1} w_γ(St, a, U_{t+k}) R_{t+k} ]
HCA+: Σ_{t≥0} γ^t [ ∇θ log π(At | St) Rt + Σ_{a∈A} ∇θ π(a | St) Σ_{k≥1} w_γ(St, a, S_{t+k}) R_{t+k} ]
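As a minimal sketch of the independence-maximization idea mentioned above (the classifier and policy are assumed to be given as logit-producing networks; names, the batch of random logits, and the chosen KL direction are illustrative assumptions, not the paper's implementation), one could penalize the divergence between the Φ-conditioned action classifier and the policy:

```python
import torch
import torch.nn.functional as F

def independence_maximization_loss(policy_logits, classifier_logits):
    """KL( q(a | s, Phi_t) || pi(a | s) ), averaged over the batch.
    The loss is zero when the Phi_t-conditioned classifier matches the policy,
    i.e. when Phi_t carries no extra information about the chosen action A_t.

    policy_logits     : (batch, n_actions) logits of pi(a | s_t)
    classifier_logits : (batch, n_actions) logits of q(a | s_t, Phi_t(tau))
    """
    log_q = F.log_softmax(classifier_logits, dim=-1)
    log_pi = F.log_softmax(policy_logits, dim=-1)
    return (log_q.exp() * (log_q - log_pi)).sum(-1).mean()

# Toy usage with random logits (illustrative only):
policy_logits = torch.randn(8, 4)
classifier_logits = torch.randn(8, 4)
print(independence_maximization_loss(policy_logits, classifier_logits))
```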
# J Contribution analysis with temporal discounting

As discussed in App. B, we can implicitly incorporate discounting into the COCOA framework by adjusting the transition probabilities to have a fixed probability of (1 − γ) of transitioning to the absorbing state s∞ at each time step. We can also readily incorporate explicit time discounting into the COCOA framework, which we discuss here. We now consider a discounted MDP defined as the tuple (S, A, p, pr, γ), with discount factor γ ∈ [0, 1], and (S, A, p, pr) as defined in Section 2. The discounted value function V^π_γ(s) = E_{T∼T(s,π)}[ Σ_{t≥0} γ^t Rt ] and action value function Q^π_γ(s, a) = E_{T∼T(s,a,π)}[ Σ_{t≥0} γ^t Rt ] are the expected discounted return when starting from state s, or from state s and action a, respectively. Table 6 shows the policy gradient estimators of V^π_γ(s) for REINFORCE, Advantage and Q-critic. In a discounted MDP, it matters at which point in time we reach a rewarding outcome u, as the corresponding rewards are discounted. Hence, we adjust the contribution coefficients to
2306.16803#192
Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis
To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
http://arxiv.org/pdf/2306.16803
Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne
cs.LG, stat.ML
NeurIPS 2023 spotlight
null
cs.LG
20230629
20231031
[ { "id": "1912.02875" }, { "id": "1606.02396" }, { "id": "1907.08027" }, { "id": "2106.04499" }, { "id": "1507.06527" }, { "id": "2010.02193" }, { "id": "2011.01298" }, { "id": "2301.04104" }, { "id": "2103.04529" }, { "id": "1705.07177" }, { "id": "1910.07113" }, { "id": "2103.06224" }, { "id": "1906.09237" }, { "id": "1706.06643" }, { "id": "1804.00379" }, { "id": "1912.01603" }, { "id": "1807.01675" }, { "id": "2002.04083" }, { "id": "1911.08362" }, { "id": "1711.00464" }, { "id": "1912.06680" }, { "id": "1912.02877" }, { "id": "2102.12425" }, { "id": "1506.02438" }, { "id": "2007.01839" } ]