# Would I have gotten that reward? Long-term credit assignment by counterfactual contribution analysis

Alexander Meulemans, Simon Schug, Seijin Kobayashi, Nathaniel Daw, Gregory Wayne. NeurIPS 2023 (spotlight). arXiv:2306.16803 [cs.LG, stat.ML], http://arxiv.org/pdf/2306.16803

Abstract: To make reinforcement learning more sample efficient, we need better credit assignment methods that measure an action's influence on future rewards. Building upon Hindsight Credit Assignment (HCA), we introduce Counterfactual Contribution Analysis (COCOA), a new family of model-based credit assignment algorithms. Our algorithms achieve precise credit assignment by measuring the contribution of actions upon obtaining subsequent rewards, by quantifying a counterfactual query: 'Would the agent still have reached this reward if it had taken another action?'. We show that measuring contributions w.r.t. rewarding states, as is done in HCA, results in spurious estimates of contributions, causing HCA to degrade towards the high-variance REINFORCE estimator in many relevant environments. Instead, we measure contributions w.r.t. rewards or learned representations of the rewarding objects, resulting in gradient estimates with lower variance. We run experiments on a suite of problems specifically designed to evaluate long-term credit assignment capabilities. By using dynamic programming, we measure ground-truth policy gradients and show that the improved performance of our new model-based credit assignment methods is due to lower bias and variance compared to HCA and common baselines. Our results demonstrate how modeling action contributions towards rewarding outcomes can be leveraged for credit assignment, opening a new path towards sample-efficient reinforcement learning.
In a discounted MDP, it matters at which point in time we reach a rewarding outcome u, as the corresponding rewards are discounted. Hence, we adjust the contribution coefficients to

$$ w_\gamma(s, a, u') = \frac{\sum_{k \geq 1} \gamma^k \, p^\pi(U_{t+k} = u' \mid S_t = s, A_t = a)}{\sum_{k \geq 1} \gamma^k \, p^\pi(U_{t+k} = u' \mid S_t = s)} - 1 \tag{120} $$

$$ = \frac{p^\pi_\gamma(A_t = a \mid S_t = s, U' = u')}{\pi(a \mid s)} - 1 \tag{121} $$

Here, we define the discounted hindsight distribution $p^\pi_\gamma(A_t = a \mid S_t = s, U' = u')$ similarly to the undiscounted hindsight distribution explained in App. C.1, but now using a different probability distribution on the time steps k: $p_\beta(K = k) = (1 - \beta)\beta^{k-1}$, where we take $\beta = \gamma$. We can readily extend Theorems 1-4 to the explicit discounted setting, by taking $\beta = \gamma$ instead of the limit of $\beta \to 1$, and using the discounted COCOA policy gradient estimator shown in Table 6. To approximate the discounted hindsight distribution $p^\pi_\gamma$, we incorporate the temporal discounting into the classification cross-entropy loss:
$$ \mathcal{L}_\gamma = \mathbb{E}_\pi \left[ \sum_{t \geq 0} \sum_{k \geq 1} \gamma^k \, \mathrm{CE}\!\left( h_\gamma(\cdot \mid S_t, U_{t+k}), \, \delta(a = A_t) \right) \right] \tag{122} $$
with $\mathrm{CE}$ the cross-entropy loss, $h_\gamma(\cdot \mid S_t, U_{t+k})$ the classification model that approximates the discounted hindsight distribution, and $\delta(a = A_t)$ a one-hot encoding of $A_t$.
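To make the loss concrete, here is a minimal numeric sketch of Eq. (122); the $\gamma^k$ weighting is our reading of "incorporating the temporal discounting" into the loss, and the array layout and function name are illustrative assumptions, not from the paper:

```python
import numpy as np

def discounted_hindsight_loss(logits, actions, gamma):
    """Discount-weighted cross-entropy for the hindsight classifier.

    logits[t, k-1] holds the classifier's action logits for the pair
    (S_t, U_{t+k}), with shape (T, K, num_actions); actions[t] is A_t.
    """
    T, K, A = logits.shape
    # log-softmax over the action dimension
    log_probs = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    loss = 0.0
    for t in range(T):
        for k in range(1, K + 1):
            # gamma^k * CE(h_gamma(. | S_t, U_{t+k}), one-hot(A_t))
            loss += gamma ** k * (-log_probs[t, k - 1, actions[t]])
    return loss
```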
# K Bootstrapping with COCOA
Here, we show how COCOA can be combined with n-step returns, and we make a correction to Theorem 7 of Harutyunyan et al. [1] which considers n-step returns for HCA.
Consider the graphical model of Fig. 15a where we model time, K, as a separate node in the graphical model (cf. App. C.1). To model n-step returns, we now define the following prior probability on K, parameterized by $\beta$:

$$ p_{n,\beta}(K = k) = \begin{cases} \dfrac{(1-\beta)\,\beta^{k-1}}{1 - \beta^{n-1}} & \text{if } 1 \leq k \leq n-1 \\ 0 & \text{else} \end{cases} \tag{123} $$

Figure 15: (a) Graphical model where we abstract time. (b) Graphical model implicitly used in the proof of Theorem 7 in Harutyunyan et al. [1].
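A minimal numeric sketch of the prior in Eq. (123) as reconstructed above (the normalizer $1 - \beta^{n-1}$ is our reading of the garbled source; the helper name is hypothetical):

```python
import numpy as np

def n_step_prior(n: int, beta: float) -> np.ndarray:
    """Truncated geometric prior p_{n,beta}(K = k) for k = 1..n-1 (Eq. 123).

    Assumes 0 < beta < 1 so the normalizer 1 - beta**(n-1) is nonzero.
    """
    k = np.arange(1, n)
    return (1 - beta) * beta ** (k - 1) / (1 - beta ** (n - 1))

# The prior sums to one over k = 1..n-1:
assert np.isclose(n_step_prior(6, 0.9).sum(), 1.0)
```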
Using $p^\pi_{n,\beta}$ as the probability distribution induced by this graphical model, we have that
$$ p^\pi_{n,\beta}(U' = u \mid S = s, A = a) = \sum_k p^\pi_{n,\beta}(U' = u, K = k \mid S = s, A = a) \tag{124} $$

$$ = \sum_k p_{n,\beta}(K = k) \, p^\pi_{n,\beta}(U' = u \mid S = s, A = a, K = k) \tag{125} $$

$$ = \frac{1 - \beta}{1 - \beta^{n-1}} \sum_{k=1}^{n-1} \beta^{k-1} \, p^\pi(U_k = u \mid S_0 = s, A_0 = a) \tag{126} $$
We introduce the following contribution coefficients that we will use for n-step returns:

$$ w_{n,\beta}(s, a, u') = \frac{\sum_{k=1}^{n-1} \beta^{k-1} \, p^\pi(U_k = u' \mid S_0 = s, A_0 = a)}{\sum_{k=1}^{n-1} \beta^{k-1} \, p^\pi(U_k = u' \mid S_0 = s)} - 1 \tag{127} $$

$$ = \frac{p^\pi_{n,\beta}(U' = u' \mid S = s, A = a)}{p^\pi_{n,\beta}(U' = u' \mid S = s)} - 1 = \frac{p^\pi_{n,\beta}(A = a \mid S = s, U' = u')}{\pi(a \mid s)} - 1 \tag{128} $$
$$ w_n(s, a, s') = \frac{p^\pi(S_n = s' \mid S_0 = s, A_0 = a)}{p^\pi(S_n = s' \mid S_0 = s)} - 1 = \frac{p^\pi(A = a \mid S = s, S_n = s')}{\pi(a \mid s)} - 1 \tag{129} $$
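In practice the last form of Eqs. (128)-(129) is the usable one: a learned hindsight classifier supplies the numerator and the policy the denominator. A minimal sketch (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def contribution_coefficients(hindsight_probs: np.ndarray,
                              policy_probs: np.ndarray) -> np.ndarray:
    """w = p^pi(a | s, u') / pi(a | s) - 1, elementwise over actions.

    hindsight_probs: the hindsight model's action distribution for a given
    (s, u') pair; policy_probs: pi(. | s) under the current policy.
    """
    return hindsight_probs / policy_probs - 1.0

# Example: if the hindsight distribution matches the policy exactly, the
# action contributed nothing and all coefficients are zero.
pi = np.array([0.25, 0.75])
print(contribution_coefficients(pi.copy(), pi))  # -> [0. 0.]
```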
Now we are ready to prove the n-step return theorem for the discounted MDP setting. We can recover the undiscounted setting by taking the limit of $\gamma \to 1^-$.
Theorem 10. Consider state s and action a for which it holds that $\pi(a \mid s) > 0$ and take $\beta$ equal to the discount factor $\gamma \in [0, 1]$. Furthermore, assume that the rewarding outcome encoding $u = f(s, a, r)$ is fully predictive of the reward (cf. Definition 2). Then the advantage $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$ is equal to
$$ A^\pi(s, a) = r(s, a) - r^\pi(s) + \mathbb{E}_\pi\!\left[ \sum_{k=1}^{n-1} \gamma^k \, w_{n,\gamma}(s, a, U_k) R_k + \gamma^n \, w_n(s, a, S_n) V^\pi(S_n) \right] $$
with $r(s, a)$ the reward model and $r^\pi(s) = \sum_a \pi(a \mid s) \, r(s, a)$.
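A sampled version of the Theorem 10 estimate, given one n-step trajectory from (s, a), might look as follows; `w_outcome` and `w_state` stand for learned coefficient models already bound to (s, a), and `value_fn` for a learned value function (all names are illustrative assumptions):

```python
def n_step_cocoa_advantage(r_sa, r_pi_s, rewards, outcomes, s_n,
                           w_outcome, w_state, value_fn, gamma, n):
    """One-sample estimate of A^pi(s, a) following Theorem 10 (a sketch).

    rewards[k-1] and outcomes[k-1] hold R_k and U_k for k = 1..n-1 along
    the sampled trajectory; s_n is the state reached after n steps.
    """
    adv = r_sa - r_pi_s
    for k in range(1, n):
        # reward term: gamma^k * w_{n,gamma}(s, a, U_k) * R_k
        adv += gamma ** k * w_outcome(outcomes[k - 1]) * rewards[k - 1]
    # bootstrap term: gamma^n * w_n(s, a, S_n) * V^pi(S_n)
    adv += gamma ** n * w_state(s_n) * value_fn(s_n)
    return adv
```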
Proof. We start with the action-value function $Q^\pi$, and will subtract the value $V^\pi$ to obtain the result on the advantage function.
$$ Q^\pi(s, a) = \mathbb{E}_\pi\!\left[ \sum_{k \geq 0} \gamma^k R_k \,\Big|\, S_0 = s, A_0 = a \right] \tag{130} $$

$$ = r(s, a) + \sum_{r' \in \mathcal{R}} \sum_{k \geq 1} \gamma^k \, p^\pi(R_k = r' \mid s, a) \, r' \tag{131} $$

$$ = r(s, a) + \sum_{r' \in \mathcal{R}} \sum_{u' \in \mathcal{U}} \sum_{k=1}^{n-1} \gamma^k \, p^\pi(R_k = r', U_k = u' \mid s, a) \, r' \tag{132} $$

$$ \quad + \gamma^n \sum_{s' \in \mathcal{S}} p^\pi(S_n = s' \mid s, a) \, V^\pi(s') \tag{133} $$
$$ = r(s, a) + \sum_{r' \in \mathcal{R}} \sum_{u' \in \mathcal{U}} \sum_{k=1}^{n-1} \gamma^k \, p(R_k = r' \mid U_k = u') \, p^\pi(U_k = u' \mid s, a) \, r' \tag{134} $$

$$ \quad + \gamma^n \sum_{s' \in \mathcal{S}} p^\pi(S_n = s' \mid s, a) \, V^\pi(s') \tag{135} $$

$$ = r(s, a) + \sum_{u' \in \mathcal{U}} r(u') \sum_{k=1}^{n-1} \gamma^k \, p^\pi(U_k = u' \mid s, a) + \gamma^n \sum_{s' \in \mathcal{S}} p^\pi(S_n = s' \mid s, a) \, V^\pi(s') \tag{136} $$

$$ = r(s, a) + \sum_{u' \in \mathcal{U}} r(u') \left( \sum_{k=1}^{n-1} \gamma^k \, p^\pi(U_k = u' \mid s) \right) \frac{\sum_{k'=1}^{n-1} \gamma^{k'} \, p^\pi(U_{k'} = u' \mid s, a)}{\sum_{k'=1}^{n-1} \gamma^{k'} \, p^\pi(U_{k'} = u' \mid s)} \tag{137} $$
$$ \quad + \gamma^n \sum_{s' \in \mathcal{S}} p^\pi(S_n = s' \mid s) \, \frac{p^\pi(S_n = s' \mid s, a)}{p^\pi(S_n = s' \mid s)} \, V^\pi(s') \tag{138} $$

$$ = r(s, a) + \sum_{u' \in \mathcal{U}} r(u') \sum_{k=1}^{n-1} \gamma^k \, p^\pi(U_k = u' \mid s) \left( w_{n,\gamma}(s, a, u') + 1 \right) \tag{139} $$

$$ \quad + \gamma^n \sum_{s' \in \mathcal{S}} p^\pi(S_n = s' \mid s) \left( w_n(s, a, s') + 1 \right) V^\pi(s') \tag{140} $$

where we use that $U'$ is fully predictive of the reward $R'$, and define $r(u') = \sum_{r' \in \mathcal{R}} p(R' = r' \mid U' = u') \, r'$. By subtracting the value function, we get

$$ A^\pi(s, a) = r(s, a) - r^\pi(s) + \sum_{u' \in \mathcal{U}} r(u') \sum_{k=1}^{n-1} \gamma^k \, p^\pi(U_k = u' \mid s) \, w_{n,\gamma}(s, a, u') \tag{142} $$

$$ \quad + \gamma^n \sum_{s' \in \mathcal{S}} p^\pi(S_n = s' \mid s) \, w_n(s, a, s') \, V^\pi(s') \tag{143} $$
Finally, we can sample from this advantage function to obtain an n-step COCOA gradient estimator, akin to Theorem 1. Note that we need to learn the state-based contribution coefficients $w_n(s, a, s')$ to bootstrap the value function into the n-step return, as the value function requires a Markov state $s'$ as input instead of a rewarding outcome encoding $u'$. Unfortunately, these state-based contribution coefficients will suffer from spurious contributions, akin to HCA, introducing a significant amount of variance into the n-step COCOA gradient estimator. We leave it to future research to investigate whether we can incorporate value functions into an n-step return, while using rewarding-outcome contribution coefficients $w(s, a, u')$ instead of state-based contribution coefficients $w_n(s, a, s')$.
Learning the contribution coefficients. We can learn the contribution coefficients $w_{n,\beta}(s, a, u')$ with the same strategies as described in Section 3, but now with training data from n-step trajectories instead of complete trajectories. If we use a discount $\gamma \neq 1$, we need to take this discount factor into account in the training distribution or loss function (cf. App. J).
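A sketch of how such training pairs could be assembled from an n-step window, sampling the offset k proportionally to $\gamma^{k-1}$ to account for the discounting (the data layout and helper name are assumptions, not from the paper):

```python
import numpy as np

def hindsight_training_pairs(states, actions, outcomes, n, gamma, rng):
    """Yield ((s_t, u_{t+k}), a_t) pairs for the hindsight classifier,
    with the offset k sampled within the n-step window proportionally
    to gamma**(k-1). All sequences are indexed by time and equally long."""
    pairs = []
    T = len(actions)
    for t in range(T):
        k_max = min(n - 1, T - 1 - t)
        if k_max < 1:
            continue
        ks = np.arange(1, k_max + 1)
        probs = gamma ** (ks - 1)
        k = rng.choice(ks, p=probs / probs.sum())
        pairs.append(((states[t], outcomes[t + k]), actions[t]))
    return pairs
```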
Correction to Theorem 7 of Harutyunyan et al. [1]. Harutyunyan et al. [1] propose a theorem similar to Theorem 10, with two important differences. The first one concerns the distribution on K in the graphical model of Fig. 15a. Harutyunyan et al. [1] implicitly use this graphical model, but with a different prior probability distribution on K:
$$ p^{\mathrm{HCA}}_{n,\beta}(K = k) = \begin{cases} \beta^{k-1}(1 - \beta) & \text{if } 1 \leq k \leq n-1 \\ \beta^{n-1} & \text{if } k = n \\ 0 & \text{else} \end{cases} \tag{145} $$
The graphical model combined with the distribution on K defines the hindsight distribution $p^\pi_{n,\beta,\mathrm{HCA}}(A = a \mid S = s, S' = s')$. The second difference is the specific Q-value estimator Harutyunyan et al. [1] propose. They use the hindsight distribution $p^\pi_{n,\beta,\mathrm{HCA}}(A = a \mid S = s, S' = s')$ in front of the value function (cf. Theorem 10), which considers that $s'$ can be reached at any time step $k \sim p^{\mathrm{HCA}}_{n,\beta}(k)$, whereas Theorem 10 uses $w_n(s, a, s')$ which considers that $s'$ is reached exactly at time step $k = n$.
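The two priors are easy to compare numerically; the sketch below (helper name ours) also illustrates the point made next, that for $\beta \to 1$ the HCA prior of Eq. (145) concentrates all of its mass on $k = n$:

```python
import numpy as np

def hca_prior(n: int, beta: float) -> np.ndarray:
    """p^HCA_{n,beta}(K = k) for k = 1..n, Eq. (145)."""
    p = np.empty(n)
    k = np.arange(1, n)
    p[: n - 1] = beta ** (k - 1) * (1 - beta)
    p[n - 1] = beta ** (n - 1)
    return p  # sums to one: (1 - beta^(n-1)) + beta^(n-1)

print(hca_prior(5, 0.999))
# -> roughly [0.001, 0.001, 0.001, 0.001, 0.996]: nearly all mass on k = n
```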
To the best of our knowledge, there is an error in the proposed proof of Theorem 7 by Harutyunyan et al. [1] for which we could not find a simple fix. For the interested reader, we briefly explain the error. One indication of the problem is that for $\beta \to 1$, all the probability mass of $p^{\mathrm{HCA}}_{n,\beta}(K = k)$ is concentrated at $k = n$, hence the corresponding hindsight distribution $p^\pi_{n,\beta,\mathrm{HCA}}(A = a \mid S = s, S' = s')$ considers only hindsight states $s'$ encountered at time $k = n$. While this is not a mathematical error, it does not correspond to the intuition of a 'time independent hindsight distribution' the authors provide. In the proof itself, a conditional independence relation is assumed that does not hold. The authors introduce a helper variable Z defined on the state space $\mathcal{S}$, with a conditional distribution
$$ p_k(Z = z \mid S' = s') = \begin{cases} \delta(z = s') & \text{if } 1 \leq k \leq n-1 \\ d(z \mid s') & \text{if } k = n \end{cases} \tag{146} $$
with the normalized discounted visit distribution $d(z \mid s') = (1 - \gamma) \sum_k \gamma^k \, p^\pi(S_k = z \mid S_0 = s')$. We can model this setting as the graphical model visualized in Fig. 15b. In the proof (last line on page 15 in the supplementary materials of Harutyunyan et al. [1]), the following conditional independence is used:
$$ p^\pi(A_0 = a \mid S_0 = s, S' = s', Z = z) = p^\pi(A_0 = a \mid S_0 = s, S' = s') \tag{147} $$

However, Fig. 15b shows that $S'$ is a collider on the path $A_0 \to S' \leftarrow K \to Z$. Hence, by conditioning on $S'$ we open this collider path, making $A_0$ dependent on $Z$ conditioned on $S_0$ and $S'$, thereby invalidating the assumed conditional independence. For example, if Z is different from $S'$, we know that $K = n$ (cf. Eq. 146), hence Z can contain information about action $A_0$, beyond $S'$, as $S'$ ignores at which point in time $s'$ is encountered.
# L HCA-return is a biased estimator in many relevant environments
L.1 HCA-return

Besides HCA-state, Harutyunyan et al. [1] introduced HCA-return, a policy gradient estimator that leverages the hindsight distribution conditioned on the return:
$$ \hat{\nabla}_\theta V^\pi = \sum_{t \geq 0} \nabla_\theta \log \pi(A_t \mid S_t) \left( 1 - \frac{\pi(A_t \mid S_t)}{p^\pi(A_t \mid S_t, Z_t)} \right) Z_t \tag{148} $$
When comparing this estimator with COCOA-reward, we see two important differences: (i) HCA-return uses a hindsight function conditioned on the return instead of individual rewards, and (ii) HCA-return leverages the hindsight function as an action-dependent baseline for a Monte Carlo policy gradient estimate, instead of using it for contribution coefficients to evaluate counterfactual actions. Importantly, the latter difference causes the HCA-return estimator to be biased in many environments of relevance, even when using the ground-truth hindsight distribution.
L.2 HCA-return can be biased

An important drawback of HCA-return is that it can be biased, even when using the ground-truth hindsight distribution. Theorem 2 of Harutyunyan et al. [1], concerning the unbiasedness of HCA-return, is valid under the assumption that for any possible random return Z for all possible trajectories
starting from state s, it holds that $p^\pi(a \mid s, z) > 0$. This restrictive assumption requires that for each observed state-action pair $(s_t, a_t)$ along a trajectory, all counterfactual returns Z resulting from a counterfactual trajectory starting from $s_t$ (not including $a_t$) result in $p^\pi(a_t \mid s_t, Z) > 0$. This implies that all returns (or rewarding states) reachable from $s_t$ should also be reachable from $(s_t, a_t)$.
Consider the following bandit setting as a simple example where the above assumption is not satisfied. The bandit has two arms, with a reward of 1 and $-2$, and a policy probability of $\frac{2}{3}$ and $\frac{1}{3}$, respectively. The advantages of the two arms are 1 and $-2$. Applying Eq. 6 from Harutyunyan et al. [1] results in $A^\pi(s, a_1) = (1 - \frac{2}{3}) \cdot 1 = \frac{1}{3}$ and $A^\pi(s, a_2) = (1 - \frac{1}{3}) \cdot (-2) = -\frac{4}{3}$. This shows that the needed assumptions for an unbiased HCA-return estimator can be violated even in simple bandit settings.
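This bias is easy to reproduce numerically (a minimal sketch; because the returns are deterministic per arm, the return pins down the action, so $p^\pi(a \mid s, Z) = 1$ for the taken arm):

```python
pi = [2 / 3, 1 / 3]       # policy probabilities of the two arms
r = [1.0, -2.0]           # deterministic rewards (= returns)

v = sum(p * ri for p, ri in zip(pi, r))       # V^pi = 0
true_adv = [ri - v for ri in r]               # [1.0, -2.0]

# HCA-return: A(s, a) = (1 - pi(a|s) / p^pi(a|s, Z)) * Z, with p^pi(a|s, Z) = 1
hca_adv = [(1 - pi[a]) * r[a] for a in range(2)]
print(true_adv)  # [1.0, -2.0]
print(hca_adv)   # [0.333..., -1.333...]  -> biased
```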
# M Additional details
M.1 Author contributions

This paper was a collaborative effort of all shared first authors working closely together. To do this fact better justice we give an idea of individual contributions in the following.

Alexander Meulemans*. Original idea, conceptualizing the theory and proving the theorems, conceptual development of the algorithms, experiment design, implementation of main method and environments, debugging, neural network architecture design, running experiments, connecting the project to existing literature, writing of manuscript, first draft and supplementary materials, feedback to the figures.

Simon Schug*. Conceptual development of the algorithms, experiment design, implementation of main method, baselines and environments, neural network architecture design, debugging, tuning and running experiments, writing of manuscript, creation of figures, writing of supplementary materials.

Seijin Kobayashi*. Conceptual development of the algorithms, experiment design, implementation of environments, baselines, main method and Dynamic Programming-based ground-truth methods, debugging, tuning and running experiments, feedback to the manuscript, writing of supplementary materials.
Nathaniel Daw. Regular project meetings, conceptual input and feedback for method and experimental design, connecting the project to existing literature, feedback to the manuscript and figures.
Gregory Wayne. Senior project supervision, conceptualising of the project idea, conceptual develop- ment of the algorithms, regular project meetings, technical and conceptual feedback for method and experimental design, connecting the project to existing literature, feedback to the manuscript and figures.
M.2 Compute resources

We used Linux workstations with Nvidia RTX 2080 and Nvidia RTX 3090 GPUs for development and conducted hyperparameter searches and experiments using 5 TPUv2-8, 5 TPUv3-8 and 1 Linux server with 8 Nvidia RTX 3090 GPUs over the course of 9 months. All of the final experiments presented take less than a few hours to complete using a single Nvidia RTX 3090 GPU. In total, we spent an estimated amount of 2 GPU months.
M.3 Software and libraries

For the results produced in this paper we relied on free and open-source software. We implemented our experiments in Python using JAX [95, Apache License 2.0] and the Deepmind Jax Ecosystem [82, Apache License 2.0]. For experiment tracking we used wandb [96, MIT license] and for the generation of plots we used plotly [97, MIT license].
# Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias
Yue Yu1*, Yuchen Zhuang1*, Jieyu Zhang2*, Yu Meng3, Alexander Ratner2, Ranjay Krishna2, Jiaming Shen4, Chao Zhang1
1 Georgia Institute of Technology, 2 University of Washington, 3 University of Illinois at Urbana-Champaign, 4 Google Research
{yueyu, yczhuang, chaozhang}@gatech.edu, [email protected], {jieyuz2, ajratner, ranjay}@cs.washington.edu, [email protected]
# Abstract

Large language models (LLMs) have been recently leveraged as training data generators for various natural language processing (NLP) tasks. While previous research has explored different approaches to training models using generated data, they generally rely on simple class-conditional prompts, which may limit the diversity of the generated data and inherit systematic biases of LLM. Thus, we investigate training data generation with diversely attributed prompts (e.g., specifying attributes like length and style), which have the potential to yield diverse and attributed generated data. Our investigation focuses on datasets with high cardinality and diverse domains, wherein we demonstrate that attributed prompts outperform simple class-conditional prompts in terms of the resulting model's performance. Additionally, we present a comprehensive empirical study on data generation encompassing vital aspects like bias, diversity, and efficiency, and highlight three key observations: firstly, synthetic datasets generated by simple prompts exhibit significant biases, such as regional bias; secondly, attribute diversity plays a pivotal role in enhancing model performance; lastly, attributed prompts achieve the performance of simple class-conditional prompts while utilizing only 5% of the querying cost of ChatGPT associated with the latter. We release the generated dataset and used prompts to facilitate future research.

# Introduction
# ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases
Jiaxi Cui* Peking University [email protected]
Zongjian Li* Peking University [email protected]
Yang Yan Peking University [email protected]
Bohua Chen Peking University [email protected]
Li Yuan* Peking University [email protected]
2306.16092 | 1 | Bohua Chen Peking University [email protected]
# Li Yuan† Peking University [email protected]
[Figure: ChatLaw framework diagram — a user query is processed by a Keyword LLM and an embedding model, matched against a vector database of legal references, and the ChatLaw LLM combines the retrieved references with a self-suggestion module to produce the final response.]
Figure 1: ChatLaw Framework
# Abstract | 2306.16092#1 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 1 | # Abstract
Generative large language models (LLMs) have demonstrated remarkable capabilities for a wide range of applications, but reducing ungrounded or erroneous responses remains a major growth area. Unlike task-specific models, there is no effective method to calibrate the confidence level of LLM responses to indicate potential errors and facilitate human-in-the-loop verification. An important source of calibration stems from expert-stipulated programmatic supervision, which is often available at low cost but has its own limitations such as noise and coverage. In this paper, we introduce a Pareto optimal self-supervision framework that can leverage available programmatic supervision to systematically calibrate LLM responses by producing a risk score for every LLM response, without any additional manual efforts. This is accomplished by learning a harmonizer model to align with LLM output as well as other weak supervision sources. The model assigns higher risk scores to more uncertain LLM responses and facilitates error correction. Experiments on standard relation extraction and classification tasks in biomedical and general domains demonstrate that the proposed risk score is highly correlated with the actual LLM error rate. By using a dynamic prompting strategy based on the risk score, we observed significant accuracy improvement for off-the-shelf LLMs, boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model and GPT-4 results past SOTA supervised results on challenging evaluation datasets.
# Introduction | 2306.16564#1 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 2 | # Introduction
Large language models (LLMs) have demonstrated exceptional performance across a broad range of NLP tasks [5, 38, 24, 36, 37, 62]. In recent research, LLMs have been proposed as task-specific training data generators, particularly for text classification, aiming to alleviate the need for task-specific data and annotations [55, 13, 56, 30, 59, 7]. While these efforts have showcased the effectiveness of LLMs as data generators, the focus has primarily been on advancing the training stage, where the generated data are utilized to train task-specific models, leaving the upstream data generation process relatively unexplored. Notably, the prevailing approach employs a simple class-conditional prompt for querying LLMs during data generation, potentially limiting the diversity of the generated data [7, 51, 60] and inheriting systematic biases inherent in LLMs [65, 21]. We refer to this simple class-conditional prompt as SimPrompt, providing an example in Table 1. In this work, we ground the LLM to ChatGPT [37] for its ability to generate high-quality, human-like text [25], and consider four challenging topic classification tasks with high cardinality from various
*These authors contributed equally to this work. ²The data and code are available on https://github.com/yueyu1030/AttrPrompt. ³We use gpt-3.5-turbo in our main experiments. | 2306.15895#2 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 2 | Figure 1: ChatLaw Framework
# Abstract
Large Language Models (LLMs) have shown the potential to revolutionize natural language processing tasks in various domains, sparking great interest in vertical-specific large models. However, unlike proprietary models such as BloombergGPT and FinGPT, which have leveraged their unique data accumulations to make strides in the finance domain, there have not been many similar large language models in the Chinese legal domain to facilitate its digital transformation. In this paper, we propose an open-source legal large language model named ChatLaw. Due to the importance of data quality, we carefully designed a legal domain fine-tuning dataset. Additionally, to overcome the problem of model hallucinations in legal data screening during reference data retrieval, we introduce a method that combines vector database retrieval with keyword retrieval to effectively reduce the inaccuracy of relying solely on vector database retrieval. Furthermore, we propose a self-attention method to enhance the ability of large models to overcome errors present in reference data, further optimizing the issue of model hallucinations at the model level and improving the problem-solving capabilities of large models. We also open-sourced our model and part of the data at https://github.com/PKU-YuanGroup/ChatLaw.
# *Equal Contribution. † Corresponding Author
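As a rough illustration of the combined retrieval described above, the sketch below mixes embedding similarity with keyword overlap when ranking reference documents; the `embed` encoder, the mixing weight `alpha`, and the scoring details are assumptions for illustration, not ChatLaw's actual implementation.

```python
import numpy as np

def hybrid_retrieve(query, query_keywords, docs, doc_vecs, embed, alpha=0.5, k=3):
    """Rank documents by a mix of vector-database similarity and keyword
    matching. `embed` is a hypothetical text -> unit-norm vector encoder,
    and `doc_vecs` holds precomputed, unit-norm document embeddings."""
    dense = doc_vecs @ embed(query)            # cosine similarity per document
    sparse = np.array([
        sum(kw in d for kw in query_keywords) / max(len(query_keywords), 1)
        for d in docs
    ])                                         # fraction of keywords present
    score = alpha * dense + (1 - alpha) * sparse
    top = np.argsort(-score)[:k]
    return [(docs[i], float(score[i])) for i in top]
```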
# 1 Introduction | 2306.16092#2 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 2 | # Introduction
Generative large language models (LLMs) have evolved to be impressively powerful in recent development [41], with Generative Pretrained Transformer (GPT) models becoming increasingly effective in their emerging abilities. The evolution from GPT-3 [3] to GPT-4 [22], as well as the emergence of other LLMs such as PaLM [4] and LLaMA [30], showed a significant leap in natural language understanding and problem-solving abilities. The generative nature of these models makes them widely applicable to numerous fields. However, as shown in [13], hallucination and erroneous responses remain a major challenge when LLMs are applied to fields with high standards for accuracy and reliability, such as the biomedical and healthcare domains.
Unfortunately, there is a lack of systematic tools to efficiently identify hallucination or estimate the confidence level of the output. As the outputs are free text, an intrinsic confidence score from the generative LLM is often unavailable, or not well calibrated with respect to the desired target, especially after applying reinforcement learning with human feedback (RLHF) [23], according to [22]. | 2306.16564#2 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 3 | Table 1: Prompt template for the NYT news dataset.
| Method | Prompt |
| --- | --- |
| SimPrompt | Suppose you are a news writer. Please generate a {topic-class} news in NYT. |
| AttrPrompt | Suppose you are a news writer. Please generate a {topic-class} news in NYT following the requirements below: ... (an attributed prompt with sampled subtopic, length, style, and location constraints; see Figure 1 for a concrete instance) |
domains. Our investigation primarily revolves around assessing the bias and diversity present within the generated training set through the lens of data attributes. In particular, data attributes encompass multiple attribute dimensions and their corresponding attribute values, where the latter represent possible instantiations of the former. For example, an attribute value such as “shorter than 200 words” could serve as an instantiation of the attribute dimension “length”.
On one hand, we employ a trained attribute classifier to examine the attribute bias present in the dataset generated using SimPrompt. When analyzing the “location” attribute in the NYT news dataset, we observe a striking bias towards “North America” in the predicted values of the generated data, accounting for a significant majority (68.01%). In contrast, instances associated with “Africa” are remarkably rare, comprising only 0.69% of the dataset (100 times less prevalent than “North America”). This regional bias exhibited in the generated dataset can pose substantial challenges when constructing reliable machine learning models [23, 6]. | 2306.15895#3 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 3 | # 1 Introduction
The continuous expansion and development of artificial intelligence have provided a fertile ground for the proliferation of large-scale language models. Models such as ChatGPT, GPT-4 [5], LLaMA [7], Falcon [1], Vicuna [2], and ChatGLM [12] have demonstrated remarkable performance in various conventional tasks, unleashing tremendous potential for the field of law. However, it is evident that acquiring high-quality, relevant, and up-to-date data is a crucial factor in the development of large language models. Therefore, the development of effective and efficient open-source legal language models has become of paramount importance.
In the realm of artificial intelligence, the development of large-scale models has permeated various domains such as healthcare, education, and finance: BloombergGPT [9], FinGPT [10], Huatuo [8], and ChatMed [14]. These models have demonstrated their utility and impact in tackling complex tasks and generating valuable insights. However, the field of law, with its inherent importance and demand for accuracy, stands as a domain that necessitates dedicated research and development of a specialized legal model. | 2306.16092#3 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 3 | As a compromise, researchers have recently resorted to heuristic approaches such as querying the LLM in various ways and estimating the correctness of the answer (e.g. [19]). However, these types of approaches are computationally expensive (multiple LLM inferences), biased by the LLM itself (information from inside the model), and not quantitative.
To address these issues, we propose a novel approach to calibrate LLM outputs and automatically identify erroneous responses. As an early attempt to tackle the LLM error problem, we restrict ourselves to problems where the expected output can be categorized, such as classification in the simplest setting. The intuitions behind our method are:
1. distilling the LLM into smaller networks leads to calibration via implicit label smoothing (see the sketch after this list), and
2. incorporating independent noisy signals is guaranteed to enhance LLM performance.
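As a minimal sketch of intuition 1, training a small student on (noisy) hard LLM labels behaves like fitting against smoothed targets; making the smoothing explicit, as below, mimics that effect. The smoothing strength is an illustrative assumption, not a value from the paper.

```python
import torch
import torch.nn.functional as F

def distill_step(student_logits, llm_labels, epsilon=0.1):
    """Cross-entropy against hard LLM labels with label smoothing: noisy
    teacher labels act like smoothed targets, pulling the student's
    confidence toward better-calibrated values."""
    return F.cross_entropy(student_logits, llm_labels, label_smoothing=epsilon)
```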
Theoretical analysis of these intuitions is provided in Section 3.2. | 2306.16564#3 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 4 | On the other hand, we explore the influence of attribute diversity on the downstream model performance. Specifically, we leverage ChatGPT to generate attributed data by incorporating desired attributes as constraints in the prompts. By comparing the performance of models trained on datasets generated using prompts with random attributes against those with fixed attributes, we observe a substantial underperformance of the latter, uncovering the importance of attribute diversity of the generated dataset.
To alleviate attribute biases and enhance the attribute diversity of the generated data, we propose to generate data with diversely attributed prompts. For a given classification task, we start by identifying attribute dimensions and their corresponding attribute values in an interactive, semi-automated process facilitated by the LLM. Subsequently, we generate diverse prompts by combining attributes randomly, replacing the simple class-conditional prompt typically used for querying data from the LLM. We refer to these diversely attributed prompts as AttrPrompt. An example of such prompts can be found in Table 1, where the LLM is instructed to generate training data based on attributes such as location and style. | 2306.15895#4 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 4 | Law plays a pivotal role in shaping societies, governing human interactions, and upholding justice. Legal professionals rely on accurate and up-to-date information to make informed decisions, interpret laws, and provide legal counsel. The complexities of legal language, nuanced interpretations, and the ever-evolving nature of legislation present unique challenges that require tailored solutions.
However, when it comes to legal issues, there is often a phenomenon of hallucination and nonsensical outputs, even with the most advanced models like GPT-4. People tend to believe that fine-tuning a model with specific domain knowledge would yield satisfactory results. However, in reality, this is not the case with early legal LLMs such as LawGPT, as there are still many instances of hallucination and unreliable outputs.
We initially recognized the need for a Chinese legal LLM. However, at the time, there were no commercially available Chinese models surpassing the scale of 13 billion parameters. Therefore, we built upon the foundation of OpenLLAMA, a commercially viable model, by expanding the Chinese vocabulary and incorporating training data from sources like MOSS. This allowed us to create a foundational Chinese language model. Subsequently, we incorporated legal-specific data to train our legal model, ChatLaw.
The key contributions of this paper are as follows: | 2306.16092#4 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 4 | 2. incorporating independent noisy signals is guaranteed to enhance LLM performance.
Theoretical analysis of these intuitions is provided in Section 3.2.
The proposed self-supervision framework is illustrated in Fig. 1. Given an input instance (top), the LLM is prompted to output its answer (left side). We leverage noisy weak supervision signals such as knowledge bases (KB) and regular expressions (RX) to produce multiple heuristic labels (right side). As the LLM and weak sources may disagree among themselves, the challenging goal is to train a harmonizer network h(x) providing a probabilistic estimate of the answer's correctness. In this paper, we focus on how to obtain a well-calibrated network utilizing the LLM response and the weak sources. Note that the entire process does not involve any human reviewing or labeling.
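The following is a minimal sketch of training such a harmonizer against the LLM and each weak source as a separate objective; here the objectives are simply summed for illustration, whereas the paper's actual Pareto optimal learning scheme is not reproduced in this excerpt. Abstaining sources (label -1) are masked out, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def harmonizer_loss(logits, llm_labels, weak_labels_list, abstain=-1):
    """One cross-entropy objective per supervision source (LLM + weak
    sources), scalarized here by a plain sum."""
    losses = []
    for labels in [llm_labels] + weak_labels_list:
        mask = labels != abstain               # ignore abstaining sources
        if mask.any():
            losses.append(F.cross_entropy(logits[mask], labels[mask]))
    return torch.stack(losses).sum()
```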
[Figure: an example biomedical input sentence for drug–disease relation extraction, with candidate entity pairs (drug, disease) receiving conflicting Negative/Positive heuristic labels from the LLM and the weak supervision sources.]
Figure 1: Self-supervision framework to calibrate LLM output and automatically detect error. | 2306.16564#4 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 5 | On the four classification tasks, we empirically evaluate the generated datasets by measuring the performance of models trained using two scenarios: 1) solely on the generated dataset, and 2) on a merged dataset comprising the real training set and the generated set. In both scenarios, the dataset generated with AttrPrompt significantly outperforms its counterpart generated with SimPrompt. Furthermore, we demonstrate the superiority of AttrPrompt over SimPrompt in terms of data/budget efficiency and compatibility with different model sizes/various LLM-as-training-data-generator approaches. Notably, AttrPrompt achieves the performance of SimPrompt while utilizing only 5% of the querying cost of ChatGPT associated with SimPrompt. Lastly, we extend the LLM-as-training-data-generator paradigm to the more challenging multi-label classification tasks for the first time, and AttrPrompt outperforms SimPrompt across all evaluation metrics.
# 2 Related Work | 2306.15895#5 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 5 | The key contributions of this paper are as follows:
1. Effective Approach to Mitigate Hallucination: We propose an approach to address hallucination by enhancing the model's training process and incorporating four modules during inference: "consult," "reference," "self-suggestion," and "response." By integrating vertical models and knowledge bases through the reference module, we inject domain-specific knowledge into the model and leverage accurate information from the knowledge base, reducing the occurrence of hallucinations.
2. Legal Feature Word Extraction Model based on LLM: We train a model that extracts legal feature words from users' everyday language. This model identifies words with legal significance, enabling efficient identification and analysis of legal contexts within user input.
3. Legal Text Similarity Calculation Model based on BERT: We train a model to measure the similarity between users' everyday language and a dataset consisting of 930,000 relevant
legal case texts. This enables the creation of a vector database for efficient retrieval of similar legal texts, facilitating further analysis and reference. | 2306.16092#5 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 5 | Figure 1: Self-supervision framework to calibrate LLM output and automatically detect error.
There have been several prior works in programmatic weak supervision to combine multiple supervision sources, but most of them produce a single label per instance by performing a weighted sum of labels from different sources [39, 32]. These approaches, although proven successful in previous applications, have significant limitations when applied to identifying LLM errors, due to the weighting dilemma. As LLMs can be more robust than other weak supervision sources such as knowledge bases and regular expressions, if the weight on the LLM is low, the aggregated result will be noisy. Conversely, if the weight on the LLM is high, the output will be overwhelmed by the LLM and an error would be difficult to identify. A new approach is needed.
In this paper, we formulate the problem as multi-objective optimization. Leveraging the abundant research in Pareto optimization [24], we propose a flexible framework that combines information from both the LLM response and the supervision sources using Pareto optimal learning. The harmonizer network h(x) is optimized on the LLM and weak sources simultaneously in a Pareto optimal manner, thus overcoming the weighting dilemma. The key contributions of our study are as follows:
1. We are the first to propose adopting Pareto optimization in combining multiple supervision sources, an entirely new framework compared to previous weak supervision work. | 2306.16564#5 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 6 | # 2 Related Work
LLMs as Training Data Generators. With the remarkable success of large language models (LLMs), researchers have recently attempted to leverage them as training data generators. Such applications include generating tabular data [4], relation triplets [8], sentence pairs [46], instruction data [40, 50, 53, 47], etc. Among these applications, we anchor on training data generation for topic classification in a zero-shot setting where no labeled data is available. In this direction, existing approaches typically use simple class-conditional prompts while focusing on mitigating low-quality issues after generation. Initial explorations in this domain include SuperGen [30] and ZeroGen [55], which use LLMs for text classification and noise-robust learning techniques [35, 52] to handle data | 2306.15895#6 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 6 | legal case texts. This enables the creation of a vector database for efficient retrieval of similar legal texts, facilitating further analysis and reference.
4. Construction of a Chinese Legal Exam Testing Dataset: We curate a dataset specifically designed for testing legal domain knowledge in Chinese. Additionally, we design an ELO arena scoring mechanism to compare the performance of different models on legal multiple-choice questions; the rating update is sketched below.
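A minimal sketch of the Elo rating update behind such an arena (this is the standard Elo formula; the K-factor of 32 is an assumed, typical value, not one reported by the paper):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Update two models' ratings after one pairwise comparison.
    score_a is 1.0 if model A's answer wins, 0.5 for a tie, 0.0 if it loses."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a += k * (score_a - expected_a)
    r_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a, r_b
```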
Furthermore, we observed that a single general-purpose legal LLM may not perform optimally across all tasks in this domain. Therefore, we trained different models for various scenarios, such as multiple-choice questions, keyword extraction, and question-answering. To handle the selection and deployment of these models, we employed a large LLM as a controller, following the methodology provided by HuggingGPT [6]. This controller model dynamically determines which specific model to invoke based on each user's request, ensuring the most suitable model is utilized for the given task.
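A minimal sketch of this controller pattern follows; the model names and the `call_llm` helper are hypothetical placeholders, not ChatLaw's actual interface.

```python
TASK_MODELS = {
    "multiple_choice": "chatlaw-mcq",        # hypothetical model names
    "keyword_extraction": "chatlaw-keywords",
    "question_answering": "chatlaw-qa",
}

def route(user_request: str, call_llm) -> str:
    """Let a general controller LLM classify the request, then dispatch it
    to the matching task-specific model."""
    task = call_llm(
        f"Classify this legal request as one of {sorted(TASK_MODELS)}: "
        f"{user_request}"
    ).strip()
    model = TASK_MODELS.get(task, "chatlaw-qa")  # fall back to general QA
    return call_llm(user_request, model=model)
```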
# 2 Dataset
In constructing the dataset, we employed several approaches to ensure its comprehensiveness and diversity. The dataset composition methods are as follows:
Collection of a vast amount of original legal data: This includes gathering legal news, social media content, and discussions from legal industry forums. These sources provide a diverse range of real-world legal text, offering insights into various legal topics and discussions. | 2306.16092#6 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 6 | 1. We are the first to propose adopting Pareto optimization in combining multiple supervision sources, an entirely new framework compared to previous weak supervision work.
2. The Pareto optimal learning assessed risk (POLAR) score from our framework is shown to be effective in estimating LLM error probability.
3. A dynamic prompting strategy designed to automatically improve high-risk instances is shown to outperform the SOTA supervised model without any manually labeled training data; a minimal sketch follows this list.
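A minimal sketch of such risk-gated dynamic prompting; the prompt-building strings, the `risk_score` interface, and the threshold are illustrative assumptions rather than the paper's exact strategy.

```python
def dynamic_prompt(x, llm, risk_score, threshold=0.5):
    """Re-query the LLM with a richer prompt only when the POLAR-style risk
    score flags the first answer as likely wrong."""
    answer = llm(f"Answer the question: {x}")              # cheap first pass
    if risk_score(x, answer) > threshold:                  # high estimated error risk
        answer = llm(f"Think step by step, then answer with examples: {x}")
    return answer
```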
# 2 Related Work
Early works in model calibration date back to the seminal work of Platt scaling [25], where a Logistic calibration model is fitted on top of the original model output. Various techniques have been developed afterwards for model calibration, including isotonic regression [38], temperature scaling [9], and Bayesian binning [21]. A contextual calibration method for LLMs was proposed by [42], which adjusts the class balance by taking an ensemble of LLM queries with content-free input. Most of these methods rely on labeled calibration data, and to our knowledge there is no systematic self-supervised approach. | 2306.16564#6 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 7 | [Figure: AttrPrompt workflow. An example attributed prompt reads: "Suppose you are a news writer. Please generate an affordable care act news in NYT following the requirements below: 1. Should focus on role of state governments; 2. Should be in length between 30 and 88 words; 3. The writing style of the news should be news analysis; 4. The location of the news is in Oceania." The LLM is first asked which attribute dimensions are vital for determining the topic of a news article (subtopics, length, location, reader-group, style, time) and then for values of each dimension; sampled values fill the prompt, which yields the generated text: "As New Zealand's state governments continue to implement the Affordable Care Act, focus has turned towards the success of Primary Health Organizations. The model has proven effective in providing comprehensive care and reducing costs for patients. However, challenges remain with coordination and equity among different regions. With ongoing progress, states will play a crucial role in shaping the future of healthcare in New Zealand."]
Figure 1: The overall workflow of AttrPrompt. | 2306.15895#7 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 7 | As accurate calibration for LLMs is challenging, heuristic methods have been proposed to detect hallucination or estimate the confidence level of the answer. [33] used self-consistency to infer the reliability of the answer. [19] proposed SelfCheckGPT as a black-box method to detect hallucination. The chain-of-thought (CoT) by [34] has also been used to indicate potential errors. These methods are less quantitative and are vulnerable in that the estimated confidence is biased by the model itself. The quality of the results can be highly dependent on the prompting strategy, and there is no systematic way to quantify this.
Our work stems from the weak supervision domain, which aggregates multiple supervision sources into a single label [39]. Following early works in distant supervision [11] and crowd-sourcing [29], data programming was proposed by [28, 26] as one of the milestones. Numerous methods have been introduced into the field [7, 31, 32, 16], with MeTaL (currently known as Snorkel) [27] as the most popular method as of today. Most of the existing works weight the sources across all examples, leading to the weighting dilemma. We address this problem with Pareto optimization, which adapts to all sources simultaneously, and show that this approach offers better LLM calibration ability.
# 3 Methodology
# 3.1 Problem setup | 2306.16564#7 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 8 | Figure 1: The overall workflow of AttrPrompt.
quality issues. SunGen [13] reweights the generated data during training with learned data quality weights, and ProGen [56] selects highly influential generated data via model feedback. In this work, we instead explore attributed prompts to reduce the issue of low informativeness and redundancy, which can be readily incorporated into the existing systems mentioned above. Notably, Chen et al. [7] also explore prompts to advance the data generation process, yet their method adopts soft prompts and requires a white-box LLM and seed examples to tune them. In contrast, our method is applicable to black-box LLMs and even LLM APIs (e.g., ChatGPT) and does not rely on any labeled examples. A recent work WANLI [26] also considers human-AI collaboration for creating more challenging training data, but requires an initial dataset and a strong task model. Instead, we aim to generate training data without any initial dataset or a pre-existing task model, which allows us to effectively handle resource-limited scenarios.
Attribute-aware Text Generation. There are also several existing works [27, 44, 57] that incorporate attributes for controlled text generation, but these are concentrated on very different tasks like style transfer. Typically, these methods necessitate explicit provision of attributes. In contrast, we introduce a semi-automated strategy that allows LLMs to propose attribute values autonomously. | 2306.15895#8 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 8 | # 3 Methodology
# 3.1 Problem setup
Denote the LLM as a function LLM(x; prompt) parameterized by a user-defined prompt for the specific task. The LLM is required to give a correct response to the given input x ∈ X. As evaluating LLM output in general free-text form is still challenging [13], in this study we restrict ourselves to tasks where the desired output space Y is finite. Fixing the prompt for the specific task and mapping LLM responses into the target space with the operator P (e.g. taking the first token), define
Λ(x) := P(LLM(x; prompt)) ∈ Y ∪ {0}, (1)
where 0 refers to the case where the LLM states "unsure" when it is not confident about the answer. The error or hallucination in this setting is defined as: the model confidently states an answer (Λ(x) ≠ 0) that is actually wrong (Λ(x) ≠ y). The goal of LLM calibration and error detection is to develop an estimator for
P(Λ(x) ≠ y | Λ(x) ≠ 0).
(2)
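To make the setup concrete, below is a minimal, self-contained sketch of Eqs. (1)-(2); `fake_llm`, the label set, and the first-token projection are illustrative assumptions, not the paper's implementation.

```python
# Sketch of Eq. (1): Lambda(x) := P(LLM(x; prompt)), with abstention mapped to None,
# plus an empirical analogue of Eq. (2) on examples with known labels.
from typing import Optional

LABELS = {"positive", "negative"}      # assumed finite target space Y

def fake_llm(prompt: str) -> str:      # stand-in for a real LLM call
    return "positive. The review praises the product."

def project(response: str) -> Optional[str]:
    """Operator P: take the first token; abstain (0 -> None) if off-schema."""
    tokens = response.strip().lower().split()
    first = tokens[0].rstrip(".,:;") if tokens else ""
    return first if first in LABELS else None

def llm_label(x: str) -> Optional[str]:
    """Lambda(x) := P(LLM(x; prompt))."""
    return project(fake_llm(f"Classify the sentiment of: {x}\nAnswer:"))

def confident_error_rate(pairs) -> float:
    """Empirical P(Lambda(x) != y | Lambda(x) != 0) over (input, gold) pairs."""
    preds = [(llm_label(x), y) for x, y in pairs]
    confident = [(p, y) for p, y in preds if p is not None]
    return sum(p != y for p, y in confident) / max(len(confident), 1)

print(confident_error_rate([("Great product!", "positive"), ("Awful.", "negative")]))
```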
Our self-supervision framework utilizes the following ingredients: | 2306.16564#8 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 9 | Discrete Prompt Optimization. Several works attempt to optimize discrete prompts for querying LLMs with large language models [43, 64, 34]. More related to us, [33, 20] reframe prompts by decomposing a complex task instruction into multiple simple ones. However, these approaches mainly focus on the inference stage for directly predicting the answer and may rely on additional labeled examples for validation. Our focus is on an orthogonal setting, optimizing prompts for LLMs with attributes to diversify the generated training data. This approach improves the modelâs overall performance without the need for additional labeled examples.
# 3 Large Language Model as Attributed Training Data Generator
In this section, we present the design of our proposed method, AttrPrompt. This technique employs class-conditional attributes as an enhancement to the query prompts employed in Large Language Models (LLMs). These augmented prompts enable more effective data generation for training purposes. A detailed workflow of the AttrPrompt can be referenced in Figure 1.
# 3.1 Datasets
While previous research has primarily focused on binary classification datasets [55, 30, 56] or datasets containing a maximum of 14 classes [13, 59], the performance of LLM as a data generator for topic
3
classification with high cardinality (i.e., many topic classes) remains unclear. Thus, we consider the following datasets from various domains with the number of topics ranging from 23 to 504: | 2306.15895#9 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 9 | P(Λ(x) ≠ y | Λ(x) ≠ 0).
(2)
Our self-supervision framework utilizes the following ingredients:
• Unlabelled input examples x1, · · · , xn ∈ X.
• LLM function Λ(x) := P(LLM(x; prompt)) ∈ Y ∪ {0}.
• m supervision functions1 that can be triggered to output heuristic labels:
λj(x) ∈ Y ∪ {0}, j = 1, · · · , m. (3)
The "triggering" of such supervision functions can be: containing certain keywords, satisfying certain regular expressions, matching a certain knowledge base record, etc. Again, 0 refers to an abstain output where the supervision function is not triggered. The supervision functions λj are typically required to be better than a random guess [28, 35], and we will show in Section 3.2 that even signals only slightly better than a random guess can be helpful.
1These are also referred to as labeling functions in some literature [39]. However, we use the name supervision function for more generalized settings.
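For illustration, supervision functions of this form can be as simple as keyword, regular-expression, or knowledge-base triggers; the following sketch uses made-up rules for a hypothetical sentiment task.

```python
# Illustrative supervision functions in the sense of Eq. (3): each fires on a
# keyword/regex/knowledge-base match, and otherwise abstains (0).
import re

ABSTAIN = 0

def lf_keyword_positive(x: str):
    return "positive" if re.search(r"\b(great|excellent|love)\b", x, re.I) else ABSTAIN

def lf_keyword_negative(x: str):
    return "negative" if re.search(r"\b(terrible|awful|refund)\b", x, re.I) else ABSTAIN

NEGATIVE_PRODUCTS = {"widget-x"}  # stand-in for a knowledge-base lookup

def lf_knowledge_base(x: str):
    return "negative" if any(p in x.lower() for p in NEGATIVE_PRODUCTS) else ABSTAIN

supervision_functions = [lf_keyword_positive, lf_keyword_negative, lf_knowledge_base]
print([lf("I love this, truly excellent!") for lf in supervision_functions])
# ['positive', 0, 0] -- on any given example most functions abstain
```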
# 3.2 How to Calibrate? | 2306.16564#9 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 10 |
classification with high cardinality (i.e., many topic classes) remains unclear. Thus, we consider the following datasets from various domains with the number of topics ranging from 23 to 504:
⢠NYT [31]: The NYT dataset comprises news articles that were authored and published by The New York Times. These articles are categorized into 26 fine-grained categories.
Amazon [3]: The Amazon dataset contains customer reviews on products from Amazonâs online
store. It covers products from 23 different categories.
⢠Reddit [15]: The Reddit dataset consists of a vast collection of user-generated content from the popular social media platform Reddit. It encompasses a wide range of topics, discussions, and interactions among users across numerous communities.
⢠StackExchange [15]: The StackExchange dataset is a rich collection of structured data encom- passing various online communities and knowledge-sharing platforms. It contains a vast array of questions, answers, comments, tags, and user interactions about specific technical problems.
We summarize the statistics of used dataset in Table 2, from which we can see that the involved datasets not only have high cardinality but also come with high imbalance ratio, i.e., the ratio of the sample size of the majority class to that of the minority class, which reflects the long-tail class issue in real applications [1].
Table 2: Statistics of datasets. | 2306.15895#10 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 10 | 1These are also referred to as labeling functions in some literature [39]. However, we use the name supervision function for more generalized settings.
# 3.2 How to Calibrate?
In Section 1 we described two intuitions behind how calibration can be achieved via self-supervision. Here we formally state the corresponding propositions, with detailed proofs in Appendix A. Proposition 1. Suppose the ground truth label y ∈ Y is unseen, and the imperfect supervision model Λ(x) has error rate α with misclassification evenly distributed:
P(Λ(x) = y) = 1 − α, P(Λ(x) = y′) = α / (|Y| − 1), y′ ≠ y. (4)
Then fitting a model h(x) to Λ(x) is equivalent to training on ground truth labels with label smoothing as defined in Section 1.1 by [20]. | 2306.16564#10 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.16564 | 11 | Then fitting a model h(x) to Λ(x) is equivalent to training on ground truth labels with label smoothing as defined in Section 1.1 by [20].
The proof is straightforward using conditional expectation. Here we refer to the work by [20], which empirically showed that label smoothing improves, or at least does not hurt, model calibration. Proposition 1 shows a path towards calibrating an LLM by distilling a small network on the specific task. However, this alone can easily fall into the pitfall where the distilled model is entirely biased by the LLM and never able to signal its errors. This shows the need for external signals that are independent of the LLM itself. While human-provided supervision is undesired, our next claim shows that even the weakest noisy signals can always help the strongest LLM, relieving manual effort in a self-supervised form. Proposition 2. Consider the target space Y = {−1, 1} for simplicity. Suppose the LLM is arbitrarily accurate with P(Λ(x) = y) = p < 1, and the weak independent signal ensemble is modeled by w(x) ∼ N(y · µ, σ²) with µ > 0. Then there always exists a function ψ(Λ, w) s.t.
P(ψ(Λ(x), w(x)) = y) > p. | 2306.16564#11 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 12 | # Interactive Attribute Generation
Different from the existing works [30, 55, 13] that directly use the simple class-conditional prompts for querying LLMs, our initial step involves identifying various types of data attributes (or metadata) that can be manipulated to generate attributed data samples. To facilitate this process, we employ ChatGPT to help establish both attribute dimensions and attribute values. Specifically, we begin by engaging ChatGPT in generating essential attribute dimensions. This is achieved by posing questions such as âWhich attribute dimensions do you consider vital in determining the topic of a news article?â for the NYT dataset, resulting in responses like âsubtopics, length, location, reader group, style, timeâ. Then, we adopt the human-ai collaboration scheme [26, 54, 61] to interactively select the attribute dimensions of the highest quality that best suit the dataset. Similarly, we prompt ChatGPT (the prompt format is listed in Appendix E) to suggest potential attribute values within each attribute dimension and choose high-quality candidates. | 2306.15895#12 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 12 | P(ψ(Λ(x), w(x)) = y) > p.
We prove this by constructing a parameterized function class in Appendix A. The implication of this proposition is: for any strong LLM with accuracy approaching 100%, and any weak supervision signal ensemble (which we approximate as a Gaussian distribution using the central limit theorem (CLT)) that only needs to be slightly better than a random guess, we can always combine them to further enhance the LLM performance.
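To make this concrete, the following toy simulation is our own construction with assumed parameters (p, µ, σ), not the paper's proof: it combines a p-accurate predictor with a weak Gaussian signal via log-likelihood-ratio weighting and shows the combined accuracy exceeding p.

```python
# Toy simulation of Proposition 2: combine an LLM correct with probability p and
# a weak signal w ~ N(y*mu, sigma^2) via a weighted vote psi, observing accuracy > p.
import numpy as np

rng = np.random.default_rng(0)
n, p, mu, sigma = 200_000, 0.90, 1.0, 1.0            # assumed values

y = rng.choice([-1, 1], size=n)                      # ground truth labels
lam = np.where(rng.random(n) < p, y, -y)             # LLM output, accuracy p
w = rng.normal(y * mu, sigma, size=n)                # weak ensemble signal

# psi(Lambda, w): weighted vote with Bayes log-odds weights for each source.
combined = np.sign(np.log(p / (1 - p)) * lam + (2 * mu / sigma**2) * w)

print(f"LLM alone:      {np.mean(lam == y):.3f}")       # ~0.900
print(f"psi(LLM, weak): {np.mean(combined == y):.3f}")  # ~0.930 > p
```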
# 3.3 Pareto optimal learning
Proposition 1 shows the calibration potential of fitting a small network to an imperfect source such as LLM outputs. Proposition 2 reveals that even the weakest supervision sources, such as regular expressions or knowledge bases, can be used to assist the LLM in some way. The limitations of the assumptions are obvious, though. In practice the weak sources are usually dependent, making the analysis of their behavior difficult. The limited number of such supervision sources is another reason why the CLT won't hold.
Therefore, we propose our novel approach of fitting a single network to all imperfect supervision sources simultaneously with Pareto optimization. Mathematically, we define it as the harmonizer model h : X → Y and solve the following Pareto optimal learning problem:
min_{h∈H} ( E[ℓ0(h(x), Λ(x))], {E[ℓj(h(x), λj(x))]}_{j=1}^m ), (5) | 2306.16564#12 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 13 | Attribute dimensions and values. There are two types of attribute dimensions: class-independent attributes and class-dependent attributes. Class-independent attributes, such as âlengthâ, remain unchanged across different classes, while class-dependent attributes, like âsubtopicâ, have varying attribute values for each class. We list attribute dimensions and values for all datasets in Table 3. These data attributes provide a human-manipulatable interface for generating attributed data. In this study, we explore the potential of leveraging attributes to enhance the data generation process, while leaving the search for the optimal data attributes for a specific task to future work.
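As a small illustration of how such attribute configurations can be turned into prompts, the sketch below samples a random configuration and fills a prompt template; the attribute values and template wording are illustrative stand-ins, not the full lists of Table 3 / Appendix G.

```python
# Minimal sketch of building an attributed prompt from a random configuration
# for the NYT setting; values below are illustrative, not the real attribute lists.
import random

ATTRIBUTES = {
    "subtopic": ["defense spending", "budget negotiations"],   # class-dependent
    "location": ["Asia", "North America", "Oceania", "Europe"],
    "style": ["investigative journalism", "news analysis", "op-eds"],
    "length": ["short (30-80 words)", "long (100-150 words)"],
}

TEMPLATE = (
    "Suppose you are a news writer. Please generate a {label} news article in NYT "
    "following the requirements below: 1. Should focus on {subtopic}; "
    "2. Should be {length}; 3. The writing style should be {style}; "
    "4. The location of the news is in {location}."
)

def random_configuration() -> dict:
    return {dim: random.choice(values) for dim, values in ATTRIBUTES.items()}

prompt = TEMPLATE.format(label="federal budget", **random_configuration())
print(prompt)  # each query to the LLM uses a freshly sampled configuration
```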
Class-dependent attribute value filtering. When dealing with class-dependent attributes, it is crucial to ensure that their attribute values are specifically associated with the corresponding class to avoid ambiguity and potential connections to multiple classes. For example, in the case of the âeconomyâ class in the NYT dataset, a candidate attribute value generated by ChatGPT for the âsubtopicâ could be âeffect of trade tariffs on manufacturing companiesâ, which is also relevant to the âinternational businessâ class in the NYT. This overlap may introduce ambiguity in the generated data. To address this issue, we employ a filtering process called Class-Dependent Attribute Value Filtering (CAF). First, we query ChatGPT for the top-5 similar classes and then check with ChatGPT whether | 2306.15895#13 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 13 | min_{h∈H} ( E[ℓ0(h(x), Λ(x))], {E[ℓj(h(x), λj(x))]}_{j=1}^m ), (5)
where all the expectations are taken for x ∼ X.
The challenge is that the multiple objectives in this optimization problem can potentially conflict, making the problem ill-posed. Following multi-objective learning theory [12], we resort to finding a harmonizer h* ∈ H that is Pareto optimal [24] as follows. Definition 1 (Pareto optimal harmonizer). h* ∈ H is a Pareto optimal harmonizer to Λ and λ1, · · · , λm, if there does not exist any h ∈ H that Pareto dominates h* in Problem 5. Mathematically, if we denote λ0 := Λ, h* needs to satisfy the following:
∄ h ∈ H, s.t.:
E[ℓj(h(x), λj(x))] ≤ E[ℓj(h*(x), λj(x))], ∀j = 0, 1, · · · , m;
∃ 0 ≤ j ≤ m, E[ℓj(h(x), λj(x))] < E[ℓj(h*(x), λj(x))].
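For intuition, Pareto dominance as used in Definition 1 can be checked mechanically over estimated loss vectors; a small self-contained sketch with made-up loss numbers:

```python
# Mechanical check of Pareto dominance over estimated loss vectors
# (E[l_0], ..., E[l_m]); the candidate harmonizers and numbers are made up.
def pareto_dominates(a, b) -> bool:
    """True if loss vector a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

losses = {"h1": [0.30, 0.25, 0.40], "h2": [0.28, 0.25, 0.38], "h3": [0.35, 0.20, 0.50]}
not_optimal = any(pareto_dominates(v, losses["h1"]) for k, v in losses.items() if k != "h1")
print(not_optimal)  # True: h2 dominates h1, so h1 is not a Pareto optimal harmonizer
```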
The Pareto optimization framework deals with dependency between supervision sources nicely, e.g. the Pareto optimality of a harmonizer won't be affected by arbitrarily duplicating supervision
| 2306.16564#13 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.16564 | 14 | The Pareto optimization framework deals with dependency between supervision sources nicely, e.g. the Pareto optimality of a harmonizer won't be affected by arbitrarily duplicating supervision
functions. However, finding Pareto optimal solutions is a challenging goal in the multi-objective optimization literature. One of the most popular approaches is to scalarize the multiple objectives into a single one. For our specific problem, we propose to approximate the problem by minimizing the following Pareto loss scalarizer G : R_+^{m+1} → R_+:
min_{h∈H} E_{x∼X}[G(ℓ0(h(x), Λ(x)), ℓ1(h(x), λ1(x)), · · · , ℓm(h(x), λm(x)))]. (6)
We require G : R_+^{m+1} → R_+ to satisfy the following conditions. Definition 2 (Pareto scalarizer). G(ℓ0, ℓ1, · · · , ℓm) is a Pareto scalarizer, if it satisfies:
| 2306.16564#14 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 15 | 4
Table 3: Attribute dimensions and values. Attributes with an asterisk* are class-dependent attributes.
Dataset # configurations / class Attribute dimension Attribute value Subtopic* Appendix G.1.1 NYT 600 Location Asia, North America, South America, Africa, Oceania, Europe Writing Style Investigative journalism, Op-Eds, Feature writing, News analysis, Profiles and interviews Length short (30-80 words); long (100-150 words) Product Brands* Appendix G.2.1 Product Names* Appendix G.2.2 Amazon 1000 Usage Experience Worst, Bad, Average, Good, Excellent Writing Style Detailed Review; Comparative Review; Pros and Cons Review; Recommendation Review Length short (30-80 words); long (100-150 words) Resources* Appendix G.3.1 Reddit 500 Experience* Appendix G.3.2 Writing Style Informative/Educational; Entertaining/Funny; Discussion; Storytelling; Help/Advice Length short (30-80 words); long (100-150 words) Scenario* Appendix G.4.1 StackExchange 400 Technical Depth Beginner; Intermediate; Advanced; Expert Writing Style Specific; Comparative; Problem-Solution; Troubleshooting; Tutorial Length short (30-80 words); long (100-150 words) | 2306.15895#15 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 15 | We require G : R_+^{m+1} → R_+ to satisfy the following conditions.
• G(ℓ0, · · · , ℓ′_j, · · · , ℓm) < G(ℓ0, · · · , ℓj, · · · , ℓm) if ℓ′_j < ℓj, for ∀j = 0, 1, · · · , m;
• G : R_+^{m+1} → R_+ is convex.
In this study, we explored four different types of scalarizers, namely:
• Linear scalarizer: G(ℓ0, ℓ1, · · · , ℓm) = Σ_{j=0}^m ℓj = ∥ℓ∥1.
• Quadratic scalarizer: G(ℓ0, ℓ1, · · · , ℓm) = (Σ_{j=0}^m ℓj)² = ∥ℓ∥1².
• Euclidean norm scalarizer: G(ℓ0, ℓ1, · · · , ℓm) = √(Σ_{j=0}^m ℓj²) = ∥ℓ∥2.
• Chebyshev scalarizer: G(ℓ0, ℓ1, · · · , ℓm) = max_{0≤j≤m} ℓj = ∥ℓ∥∞. | 2306.16564#15 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 16 | each class-dependent attribute value is related to these top-5 similar classes. Then, if the answer is positive, which indicates a potential ambiguity, we remove that attribute value for the specific class.
# 3.3 Data generation and model training
Given the data attributes, one could prompt LLMs to generate data samples with diverse attribute configurations. For example, an attribute configuration for the âfederal budgetâ class of the NYT dataset could be {âsubtopicâ=âdefense spendingâ, âlengthâ=âshort:min-words=30, max-words=80â, âstyleâ=âinvestigative journalismâ, âlocationâ=âNorth Americaâ}. In Table 3, we list the number of configurations per class, and one can further expand the number of configurations by adding more attribute dimensions and values. To generate attributed data samples, we prompt ChatGPT with random configurations. In particular, each time we generate a random configuration, complete a prompt template (see Table 1) with the generated configuration, and query ChatGPT with the completed prompt to collect generated data samples. | 2306.15895#16 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 16 | • Chebyshev scalarizer: G(ℓ0, ℓ1, · · · , ℓm) = max_{0≤j≤m} ℓj = ∥ℓ∥∞.
Note that a nonlinear function of the loss vector ℓ is important in shaping the optimization in Eq. 6 differently through Jensen's inequality; e.g., the quadratic scalarizer puts more emphasis on the challenging examples. While the first three scalarizers are Pareto scalarizers, the Chebyshev scalarizer does not satisfy the above definition and serves as a comparison in our experiment.
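A plain-Python sketch of these four scalarizers over a vector of per-source losses (our illustration; the actual training minimizes the expectation of G over x as in Eq. 6):

```python
# The four scalarizers over per-source losses
# (l_0 against the LLM, l_1..l_m against the supervision functions).
import math

def linear(losses):    return sum(losses)                            # ||l||_1
def quadratic(losses): return sum(losses) ** 2                       # ||l||_1^2
def euclidean(losses): return math.sqrt(sum(l * l for l in losses))  # ||l||_2
def chebyshev(losses): return max(losses)   # ||l||_inf; not a Pareto scalarizer

losses = [0.7, 0.2, 0.4]  # e.g. cross-entropies of h(x) vs. Lambda(x) and two sources
print(linear(losses), quadratic(losses), euclidean(losses), chebyshev(losses))
# 1.3  1.69...  0.83...  0.7
```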
For any Pareto scalarizer, the proposed approach in Eq. 6 is guaranteed by the following theorem. Theorem 1. Suppose G : R_+^{m+1} → R_+ is a Pareto scalarizer as in Definition 2. Solving the problem in Equation 6 approximates a Pareto optimal harmonizer by upper-bounding an objective whose optimal solution is Pareto optimal as in Definition 1. | 2306.16564#16 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 17 | Throughout the experiments, we compare our method (AttrPrompt) against the simple class-conditional prompt (SimPrompt, [55]) and the original training set of each dataset (Gold). For a fair comparison, we set the number of generated samples to be the same as Gold for both AttrPrompt and SimPrompt. In principle, the generated dataset can be combined with any classifier (Sec. 6.4) and training technique (Sec. 6.5); if not otherwise specified, we choose to fine-tune BERT-base-uncased [11] as the backbone and use the standard cross-entropy loss by default. For hyperparameter selection, we adhere to the recommendations in [41, 30] for SimPrompt and AttrPrompt, and do not use the validation set for model selection. Detailed hyperparameter configurations can be found in Appendix B. | 2306.15895#17 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 17 | We refer to Appendix A for a detailed proof of the theorem. The outline of the proof is two-fold. In the first step, Jensen's inequality is applied to upper-bound an intermediate objective. We then use the definitions of Pareto optimal harmonizer and Pareto scalarizer to prove by contradiction.
Based on the theoretical analysis above, we propose to find the harmonizer model h(x) for LLM calibration by solving problem 6 using stochastic gradient-based algorithms such as Adam [14]. In order for h(x) to deliver high-quality calibration, we recommend using the cross-entropy loss for ℓ0, ℓ1, · · · , ℓm. Once an optimal solution h⋆ ∈ H is found, for any input x and LLM response Λ(x), the Pareto optimal learning assessed risk (POLAR) score is defined as
ζ(x, Λ(x); h⋆) = P_{Y∼h⋆(x)}(Λ(x) ≠ Y | Λ(x) ≠ 0),   (7)
where P_{Y∼h⋆(x)} is the probability distribution of Y estimated by h⋆(x). The whole process is summarized in Algorithm 1 below. (A code sketch of this score follows this record.)
# 3.4 LLM error correction with POLAR-assisted dynamic prompting | 2306.16564#17 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
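For concreteness, a minimal sketch of the POLAR score of Equation 7: given the harmonizer's class distribution for x, the score is the probability mass placed on labels other than the LLM's (non-abstaining) response. The array-index label encoding is an assumption about the implementation, not the paper's code.

```python
import numpy as np

def polar_score(class_probs, llm_label):
    """POLAR score of Eq. 7: P_{Y ~ h*(x)}(Lambda(x) != Y | Lambda(x) != 0).

    class_probs: harmonizer distribution h*(x) over classes 0..K-1
                 (abstention excluded, probabilities normalized).
    llm_label:   the LLM's predicted class index, or None if it abstained.
    """
    if llm_label is None:                 # Lambda(x) = 0: conditioning excludes it
        return None
    class_probs = np.asarray(class_probs, dtype=float)
    return 1.0 - class_probs[llm_label]   # mass on all labels != Lambda(x)
```

Responses can then be ranked by this score to flag the most error-prone ones for human review or re-prompting.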
2306.15895 | 18 | A glimpse of the generated data. Here, we present examples of data generated by AttrPrompt and SimPrompt, and real data from the Gold set for the "federal budget" class in the NYT dataset (Table 4). It is evident that the data generated by ChatGPT exhibit high quality. Particularly, when comparing AttrPrompt to SimPrompt, we observe that AttrPrompt renders more diverse samples. This is because SimPrompt tends to generate news focused on the U.S., while AttrPrompt has the capability to generate news from various locations around the world.
# 4 Diversity Analysis of the Generated Data
Quantitative study of diversity. To quantify the diversity of the generated training data of SimPrompt and AttrPrompt, we first show the vocabulary size of each generated dataset and the Gold dataset, which is a natural way to check the lexical diversity of datasets (Table 5). From the table, we can see that AttrPrompt has higher lexical diversity than SimPrompt in terms of both the vocabulary size of the whole dataset (All in the table) and the averaged vocabulary size across classes (Class Avg. in the table). Yet, both have much smaller vocabulary sizes than the Gold, indicating there is still room for improvement in ChatGPT's lexical diversity. (A vocabulary-size sketch in code follows this record.)
Table 4: Data examples of different datasets: the "federal budget" class of the NYT dataset. | 2306.15895#18 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
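The "All" and "Class Avg." vocabulary statistics reported in Table 5 can be computed in a few lines; whitespace tokenization is a simplifying assumption, since this excerpt does not specify the tokenizer.

```python
from collections import defaultdict

def vocab_sizes(texts, labels):
    """Return (overall vocab size, average per-class vocab size) as in Table 5."""
    overall, per_class = set(), defaultdict(set)
    for text, label in zip(texts, labels):
        tokens = text.lower().split()   # naive whitespace tokenization
        overall.update(tokens)
        per_class[label].update(tokens)
    class_avg = sum(map(len, per_class.values())) / max(len(per_class), 1)
    return len(overall), class_avg
```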
2306.16564 | 18 | # 3.4 LLM error correction with POLAR-assisted dynamic prompting
Being able to identify LLM responses with a high risk of error opens up the opportunity to automatically improve those responses. Toward this goal, we propose two dynamic prompting strategies, both assisted by the POLAR score, to illustrate the potential use of the POLAR score to correct LLM errors. The two strategies are as follows:
Dynamic self-examination In this strategy, whenever the POLAR score ζ(x, Λ(x); h⋆) > δ for threshold δ, we ask the LLM to reflect on its answer. See appendix for detailed prompt.
Algorithm 1 POLAR for LLM responses
1: Input: LLM response Λ as in Equation 1, supervision functions λ1, · · · , λm as in Equation 3, unlabeled training examples x1:n. Initialize harmonizer h ∈ H.
2: for i = 1 to n do
3: lossLLM = (Λ(xi) ≠ 0) · ℓ(h(xi), Λ(xi))
4: for j = 1 to m do
5: lossj = (λj(xi) ≠ 0) · ℓ(h(xi), λj(xi))
6: end for
7: Update h with SGD iteration of min_h G(lossLLM, loss1, · · · , lossm).
8: end for
9: Output: Harmonizer h⋆. For any example x and LLM response Λ(x), the POLAR score of the response is estimated according to Equation 7. (A training-step sketch in code follows this record.) | 2306.16564#18 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
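A minimal PyTorch sketch of one update of Algorithm 1, using the quadratic scalarizer the excerpt recommends elsewhere; the label encoding (0 = abstain, classes 1..K) follows the paper's convention, while the shift to 0-based targets is an implementation assumption.

```python
import torch
import torch.nn.functional as F

def pareto_update(harmonizer, optimizer, x, llm_label, weak_labels):
    """One SGD iteration of min_h G(loss_LLM, loss_1, ..., loss_m)."""
    logits = harmonizer(x)                        # shape (1, K)
    losses = []
    for lab in [llm_label, *weak_labels]:
        if lab != 0:                              # (label != 0) masks abstentions
            target = torch.tensor([lab - 1])      # shift classes 1..K -> 0..K-1
            losses.append(F.cross_entropy(logits, target))
    if losses:
        objective = torch.stack(losses).sum() ** 2   # quadratic scalarizer G
        optimizer.zero_grad()
        objective.backward()
        optimizer.step()
```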
2306.15895 | 19 | Example 1. The emergency manager who was sent to reverse the fortunes of this financially troubled city asked some of its creditors on Friday to accept pennies on the dollar as he laid out his plan for tackling Detroit's staggering debt, kick starting negotiations that could determine whether the city is headed to bankruptcy court... 2. Saying that its debt could reach 45 billion by 2017 if Congress does not act, the Postal Service on Wednesday called on lawmakers to give it the flexibility to change its business model to keep itself solvent. 3. Governmental Affairs Committee, Patrick R. Donahoe, the postmaster general, asked Congress to give the Postal Service permission to run its own health plan for employees and retirees, modify a Congressional mandate that requires the agency to pay... 1. Washington D.C. The United States government has recently passed its federal budget for the next fiscal year, setting aside a whopping 4.8 trillion for spending. This is the largest budget in the country's history and reflects the government's commitment to improving the country's economy and infrastructural development. 2. WASHINGTON D.C. The Trump administration released its proposed budget for the federal government on Monday, calling | 2306.15895#19 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 19 | 3: lossLLM = (Λ(xi) ≠ 0) · ℓ(h(xi), Λ(xi))
4: for j = 1 to m do
5: lossj = (λj(xi) ≠ 0) · ℓ(h(xi), λj(xi))
6: end for
7: Update h with SGD iteration of min_h G(lossLLM, loss1, · · · , lossm).
9: Output: the POLAR score of the response is estimated according to Equation 7.
Dynamic self-supervision In this strategy, we utilize the supervision functions as sources to help the LLM reflect on its initial response (e.g., stating the matched regular expressions or knowledge base findings).
Algorithm 2 outlines the POLAR-assisted dynamic prompting strategy. We provide a detailed prompting design description in Appendix D.
Algorithm 2 POLAR-assisted dynamic self-supervision for LLM
1: Input: Example x, LLM response Λ(x), supervision functions λ1, · · · , λm, harmonizer h⋆.
2: if ζ(x, Λ(x); h⋆) > δ then
3: Initialize Reflection Prompt.
4: for j = 1 to m do
5: if λj(x) ≠ 0 then
6: Add evidence from supervision function j to the Reflection Prompt.
7: end if
8: end for
9: Respond to the LLM with the Reflection Prompt and get new response Λ′(x).
10: return Λ′(x)
11: else
12: return Λ(x)
13: end if
(A code sketch of this strategy follows this record.) | 2306.16564#19 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
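Algorithm 2's control flow is easy to sketch. Here `llm` is a hypothetical prompt-to-label callable, `build_prompt` and `polar_score` are stand-ins (the latter as in the earlier sketch), and the reflection-prompt wording is illustrative rather than the paper's Appendix D prompt.

```python
def polar_assisted_answer(x, llm, harmonizer, supervision_fns,
                          build_prompt, polar_score, delta=0.5):
    """Dynamic self-supervision: re-prompt with evidence when risk > delta."""
    answer = llm(build_prompt(x))                      # initial response
    if polar_score(harmonizer(x), answer) > delta:
        evidence = [f"Source {j} suggests label {fn(x)}"
                    for j, fn in enumerate(supervision_fns) if fn(x) != 0]
        reflection = (build_prompt(x)
                      + "\nPlease re-examine your answer in light of:\n"
                      + "\n".join(evidence))
        return llm(reflection)                         # corrected response
    return answer                                      # low risk: keep it
```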
2306.15895 | 20 | the country's economy and infrastructural development. 2. WASHINGTON D.C. The Trump administration released its proposed budget for the federal government on Monday, calling for cuts to programs that provide assistance to low income Americans while increasing military spending. The proposed budget includes a 10 percent increase in defense spending, bringing the total military budget to 574 billion... 3. Washington D.C. The U.S. federal government has presented its budget for the fiscal year 2022, which includes a total of 6 trillion in spending and initiatives aimed at making the country more competitive and sustainable. The budget plan includes significant investments in infrastructure, healthcare, education, and renewable energy. | 2306.15895#20 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 20 | 3: Initialize Reflection Prompt.
4: for j = 1 to m do
5: if λj(x) ≠ 0 then
6: Add evidence from supervision function j to the Reflection Prompt.
7: end if
8: end for
9: Respond to the LLM with the Reflection Prompt and get new response Λ′(x).
10: return Λ′(x)
11: else
12: return Λ(x)
# 4 Experiments
Dataset Since one of the essential ingredients in our framework is the supervision functions, for reproducibility we only experiment on tasks that have publicly available supervision functions, which are mostly developed in the weak supervision literature. As such, we refer to the benchmark datasets collected by [40], and experiment on four different NLP tasks, namely CDR [17], ChemProt [15], SemEval [10], and SMS [1]. The labels on the training sets are removed. Gold labels are available on the test sets for evaluation. We select the datasets to broadly cover the following aspects:
• Domain: General domain (SemEval, SMS), biomedical domain (CDR, ChemProt).
• Task: Relation extraction (CDR, ChemProt, SemEval), classification (SMS). | 2306.16564#20 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 21 | Table 5: Comparison of the vocabulary size of different datasets (All = vocabulary of the whole dataset; Class Avg. = vocabulary size averaged across classes).
Method       NYT All   NYT Class Avg.   Amazon All   Amazon Class Avg.   Reddit All   Reddit Class Avg.   StackExchange All   StackExchange Class Avg.
Gold         70.8k     11.3k            44.7k        6.64k               50.8k        4.62k               52.3k               3.60k
SimPrompt    20.6k     3.13k            11.6k        2.50k               19.9k        3.06k               13.3k               2.20k
AttrPrompt   21.4k     3.50k            14.0k        2.76k               25.4k        3.64k               17.8k               2.93k
Table 6: Comparison of two quantitative metrics on diversity: the average pairwise sample similarity (APS) and inter-sample N-gram frequency (INGF) of different datasets. For APS, lower values indicate better diversity; for INGF, higher values indicate better diversity. | 2306.15895#21 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 21 | • Task: Relation extraction (CDR, ChemProt, SemEval), classification (SMS).
• Difficulty: The problem set includes tasks that are easily solvable by advanced LLMs like GPT-4, such as SMS (99% F-1), as well as tasks that are still challenging for GPT-4, e.g. CDR (74% F-1), ChemProt (42% F-1), and SemEval (67% F-1).
Prompt design In order to leverage the maximal capability of the LLMs, we carefully design the prompt for each problem to clarify the problem setting, knowledge background, input and output structure, and the instruction to state "unsure". See appendix for detailed prompt information.
Supervision functions The supervision functions for the datasets are based on simple rules provided by human experts [26, 37, 43, 2]. The rules in the supervision functions include:
• Keywords and regular expression pattern checking.
• Knowledge base (e.g. the Comparative Toxicogenomics Database [5])
• Hierarchical combination of the above. (Example supervision functions in code follow this record.) | 2306.16564#21 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
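To make the rule types above concrete, here are hypothetical supervision functions for a chemical-disease relation task; the regexes and the one-pair knowledge base are illustrative stand-ins (a real system would query something like the CTD), and 0 encodes abstention as in the paper.

```python
import re

KB_PAIRS = {("aspirin", "reye syndrome")}   # stand-in for a CTD lookup

def lf_induce_keyword(sentence, chem, dis):
    # keyword / regular-expression rule voting for "induces"
    return 1 if re.search(r"\binduce(s|d)?\b", sentence, re.I) else 0

def lf_negation(sentence, chem, dis):
    # pattern rule voting for "no relation"
    return 2 if re.search(r"\bno (evidence|association)\b", sentence, re.I) else 0

def lf_knowledge_base(sentence, chem, dis):
    # knowledge-base rule: fire only on known chemical-disease pairs
    return 1 if (chem.lower(), dis.lower()) in KB_PAIRS else 0
```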
2306.15895 | 22 | NYT            Inter-Class APS   Intra-Class APS   APS     INGF
Gold           0.098             0.358             0.122   7618.1
SimPrompt      0.101             0.568             0.135   5277.2
AttrPrompt     0.159             0.474             0.182   6688.6
Amazon         Inter-Class APS   Intra-Class APS   APS     INGF
Gold           0.101             0.251             0.114   4992.1
SimPrompt      0.207             0.620             0.241   2266.5
AttrPrompt     0.225             0.483             0.246   2605.5
Reddit         Inter-Class APS   Intra-Class APS   APS     INGF
Gold           0.044             0.261             0.054   9079.6
SimPrompt      0.173             0.818             0.201   2697.8
AttrPrompt     0.106             0.474             0.122   3994.5
StackExchange  Inter-Class APS   Intra-Class APS   APS     INGF
Gold           0.056             0.196             0.063   5492.4
SimPrompt      0.282             0.804             0.302   2259.8
AttrPrompt     0.105             0.375             0.114   2464.3
(a) NYT (b) Amazon (c) Reddit (d) StackExchange | 2306.15895#22 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 22 | • Keywords and regular expression pattern checking.
• Knowledge base (e.g. the Comparative Toxicogenomics Database [5])
• Hierarchical combination of the above.
Harmonizer training We train the harmonizer h(x) by Pareto optimal learning following Equation 6 and Algorithm 1. Our main experiments used BERT [6] (PubMedBERT [8] for the biomedical datasets CDR and ChemProt) as the harmonizer model, and the quadratic scalarizer. Other choices are discussed in Section 5. Training details are described in Appendix C.
# 4.1 POLAR calibration of LLM error
In this section we present our implementation results of Algorithm 1. The fitted harmonizer is applied to the unseen test set to give a POLAR score for each LLM response. Gold labels on the test set are used to estimate the actual error rate in Equation 2, and to evaluate how well the proposed POLAR score is calibrated and correlated with the actual error rate. | 2306.16564#22 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 23 | Figure 2: The distribution of cosine similarity of text pairs sampled from the same class.
We then visualize the diversity of datasets via the distribution of cosine similarity of same-class text pairs (Figure 2), where the cosine similarity is calculated based on the embedding of Sentence-BERT [42], and include two additional metrics, namely average pairwise sample similarity (APS) and inter-sample N-gram frequency (INGF) [32], as shown in Table 6. We can see that the Gold dataset has the lowest cosine similarity, indicating that real data has the largest diversity. In contrast, the similarity between samples generated by SimPrompt is high. Compared to SimPrompt, the dataset generated with AttrPrompt exhibits lower cosine similarity, and its distribution is close to that of the Gold, which shows AttrPrompt could render more diverse data. Apart from the above automatic evaluation processes, we also conduct a human study in Appendix D.1 to manually evaluate the quality of the generated training data. (An APS sketch in code follows this record.)
| 2306.15895#23 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
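A minimal sketch of the APS computation described above, using Sentence-BERT embeddings via the sentence-transformers package; the model name is illustrative rather than the paper's choice.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def average_pairwise_similarity(texts, model_name="all-MiniLM-L6-v2"):
    """APS: mean cosine similarity over unique text pairs (lower = more diverse)."""
    model = SentenceTransformer(model_name)
    emb = model.encode(texts, normalize_embeddings=True)  # unit-norm rows
    sim = emb @ emb.T                                     # cosine similarity matrix
    iu = np.triu_indices(len(texts), k=1)                 # unique pairs only
    return float(sim[iu].mean())
```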
2306.16564 | 23 | Fig. 2 shows the POLAR score calibration results for GPT-4 on the CDR chemical-disease relation extraction task. The calibration curve in Figure 2a shows that the POLAR score is a good estimator of the true probability that the LLM is making a mistake, as defined in Equation 2. Figure 2b shows that the POLAR score is highly correlated with the error rate. Lastly, Figure 2c shows that, for all three LLMs, the responses with the highest POLAR scores are the most likely to contain hallucinations or errors, and the highest POLAR scores indicate an almost 100% error rate.
[Figure 2a: POLAR calibration plot for GPT-4 on CDR; ECE = 0.043; x-axis: POLAR risk score; y-axis: error rate.]
[Figure 2b: POLAR score vs. error rate for GPT-4 on CDR; observed error rate with least-squares fit, R² = 0.890; x-axis: average POLAR risk score.]
(a) Error calibration curve
(b) Correlation with error rate
[Figure 2c: error rate among top risky LLM responses, CDR.] | 2306.16564#23 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 24 |
The importance of the attribute diversity. We investigate the impact of attribute diversity within AttrPrompt on model performance. Specifically, we conduct experiments by fixing one attribute dimension to a candidate value while keeping other attribute values random. Then, we generate 50 samples per class using such a one-fixed-others-random configuration to compose a dataset and evaluate the performance of the trained model. Note that for class-dependent attributes, we sample one value for each class and repeat it 5 times, since it is computationally prohibitive to enumerate all combinations of class-dependent attribute values. In Figure 3, each bar stands for a specific one-fixed-others-random configuration; compared to random configurations, most one-fixed-others-random configurations result in a performance drop. To further reduce the attribute diversity, we pick the attribute value with the best performance for each attribute dimension (the highest bar within each attribute dimension) and compose them into a single configuration (the dashed blue line). We can see that the dashed blue line is significantly worse than the random configuration, even though it is composed of the individually best attribute values. This illustrates the importance and necessity of designing prompts with diverse attributes.
(a) Amazon (b) NYT
Figure 3: Bar charts of model performance with different attribute configurations of AttrPrompt.
# 5 Bias Analysis of the Generated Data | 2306.15895#24 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 24 | By incorporating data from these diverse sources and construction methods, our dataset encompasses a wide range of legal contexts, ensuring that the developed model is capable of effectively understanding and addressing various legal scenarios.
Once these data components are collected, the dataset undergoes a rigorous cleaning process. This involves filtering out short and incoherent responses, ensuring that only high-quality and meaningful text is included. Additionally, to enhance the dataset, we leverage the ChatGPT API for assisted construction, allowing us to generate supplementary data based on the existing dataset.
# 3 Training Process
The Keyword LLM is a language model that extracts keywords from abstract consulting problems raised by users. The Law LLM, on the other hand, extracts legal terminology that may be involved in user consultations. The ChatLaw LLM is the final language model that outputs responses to users; it refers to relevant legal clauses and utilizes its own summarization and Q&A functions to generate advice for users in their consultations.
# 3.1 ChatLaw LLM
To train ChatLaw, we fine-tuned it on the basis of Ziya-LLaMA-13B [11] using Low-Rank Adaptation (LoRA) [3]. Additionally, we introduced the self-suggestion role to further alleviate model hallucination issues. The training process was carried out on multiple A100 GPUs, and the training costs were further reduced with the help of DeepSpeed. (A LoRA configuration sketch in code follows this record.)
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
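A minimal sketch of LoRA fine-tuning with the Hugging Face peft library; the hub id and every hyperparameter below are illustrative assumptions, not the paper's actual configuration (which also used DeepSpeed across multiple A100s).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "IDEA-CCNL/Ziya-LLaMA-13B-v1"        # assumed hub id for Ziya-LLaMA-13B
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,  # illustrative values
    target_modules=["q_proj", "v_proj"],    # LLaMA attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)     # freeze base, train adapters only
model.print_trainable_parameters()
```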
2306.16564 | 24 | (a) Error calibration curve
(b) Correlation with error rate
[Figure 2c: error rate among top risky LLM responses on CDR for GPT-4, GPT-3.5-turbo, and Text-davinci-003; x-axis: percentile of top risky responses (%); y-axis: response error rate.]
(c) LLM error detection
Figure 2: LLM error calibration and hallucination detection using the POLAR score. Figure 2a shows the error rate on the test set for LLM responses ranked into ten POLAR score bins. The expected calibration error (ECE) is the weighted average of the absolute deviation of the POLAR score from the actual error rate in each bin. Figure 2b ranks the LLM responses into POLAR score bins each with 100 examples, and plots the average POLAR score and error rate for each bin. The R² is calculated on the scatter plot. Figure 2c shows the LLM error rate among the top POLAR score examples. (An ECE sketch in code follows this record.)
| 2306.16564#24 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
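The ECE described in the Figure 2 caption can be computed directly from per-response POLAR scores and 0/1 error indicators; a minimal sketch with ten equal-width bins (the binning scheme is an assumption consistent with the caption).

```python
import numpy as np

def expected_calibration_error(scores, errors, n_bins=10):
    """Weighted average of |mean POLAR score - observed error rate| per bin."""
    scores, errors = np.asarray(scores, float), np.asarray(errors, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (scores >= lo) & (scores < hi)
        if mask.any():   # weight = fraction of responses falling in this bin
            ece += mask.mean() * abs(scores[mask].mean() - errors[mask].mean())
    return ece
```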
2306.15895 | 25 | (a) Amazon (b) NYT
Figure 3: Bar charts of model performance with different attribute configurations of AttrPrompt.
# 5 Bias Analysis of the Generated Data
In this section, we study the attribute bias in both real and generated datasets. In particular, we pick the "location" attribute of the NYT data as a case study. While existing works using LLMs as data generators usually overlook the bias embedded in the generated data, we hope that this preliminary analysis draws the community's attention to the attribute bias behind the data generated by LLMs such as ChatGPT.
We manually annotate the location for 100 samples from each of the Gold, SimPrompt, and AttrPrompt datasets. Note that we include "unknown" as an option in manual annotation to absorb text without clear location specifications. To visualize the distribution of annotated locations in the datasets, we plot the pie charts in Figure 4. From the visualizations, one can see that both the Gold and SimPrompt datasets are largely biased towards "North America", while the AttrPrompt dataset renders a relatively balanced "location" distribution.
[Figure 4: pie charts for Gold, SimPrompt, and AttrPrompt; legend: Europe, Asia, Africa, North America, South America, Oceania, Unknown.]
Figure 4: Pie charts of the distributions of the "location" attribute for the NYT dataset. | 2306.15895#25 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 25 | # 3.2 Keyword LLM
When creating the ChatLaw product by combining a vertical-specific LLM with a knowledge base, it is crucial to retrieve relevant information from the knowledge base based on user queries. We initially tried traditional software development methods such as MySQL and Elasticsearch for retrieval, but the results were unsatisfactory. Therefore, we attempted to use a pre-trained BERT model for embedding,
Algorithm 1 Legal retrieval based on Large Language Model keyword extraction
1: Initialize the BERT embedding model and the keyword extraction model. 2: Initialize the legal database as L, where li ∈ L and i represents the i-th law. Let M be the number
of laws in the legal database.
3: Initialize the legal scores as S, where si ∈ S represents the score corresponding to the i-th law, all initialized to 0. The number of elements in S is also M.
4: Extract keywords from the user query using the keyword extraction model, then input each keyword into the BERT model to obtain a collection K of keyword vectors, where ki represents the vector for the i-th keyword, with a total of N keywords. Obtain s by inputting the user's question into BERT.
5: Initialize α for assigning weight to s. 6: for i = 1 to N do 7: 8: 9: 10: 11: end for 12: return TopK(S)
(A retrieval-scoring sketch in code follows this record.)
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
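To make the scoring loop of Algorithm 1 above concrete, here is a minimal NumPy sketch; the function name, the alpha default, and k are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def topk_laws(keyword_vecs, query_vec, law_vecs, alpha=0.5, k=5):
    """Blend each normalized keyword embedding with the normalized
    query embedding, then accumulate cosine similarity over all laws."""
    scores = np.zeros(len(law_vecs))
    s = query_vec / np.linalg.norm(query_vec)
    for ki in keyword_vecs:
        vi = ki / np.linalg.norm(ki) + alpha * s
        vi = vi / np.linalg.norm(vi)          # unit vector for cosine scoring
        for j, lj in enumerate(law_vecs):
            scores[j] += vi @ lj / np.linalg.norm(lj)
    return np.argsort(scores)[::-1][:k]       # indices of the top-k laws
```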
2306.16564 | 25 |
Table 1 shows the performance of the POLAR score for LLM error calibration. We provide two metrics, expected calibration error (ECE) and the R2 between the POLAR score and the actual error rate, both calculated the same way as in Figure 2. We report the results for four datasets and three LLMs (GPT-4, GPT-3.5-turbo, and text-davinci-003). The following four methods serve as baseline comparisons:
• Snorkel [27]: One of the most recommended weak supervision methods [40], combining multiple supervision sources via matrix completion. A Snorkel model fitted on the training set is used to give class probabilities for the LLM error rate.
• Majority vote: Another popular weak supervision method that estimates class probabilities according to the voted ratios among the LLM and the supervision functions.
• LLM distilled: Fit a BERT (PubMedBERT) model to the LLM output on the task directly, and take the class probabilities from the finetuned BERT model to estimate the LLM error rate.
• LLM ensemble: Query the LLM multiple times and estimate class probabilities from the response ensemble. As this approach is extremely expensive, we only implement it for GPT-3.5-turbo on CDR, with a resulting ECE = 0.4466, far from comparable to the other approaches. | 2306.16564#25 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
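As a concrete reading of the majority-vote baseline described in the chunk above, here is a minimal sketch; encoding an abstaining supervision function as None is an assumption made for illustration:

```python
from collections import Counter

def majority_vote_proba(votes, classes):
    """Estimate class probabilities as voted ratios among the LLM
    response and the supervision functions; abstains (None) are skipped."""
    counts = Counter(v for v in votes if v is not None)
    total = sum(counts.values()) or 1
    return {c: counts.get(c, 0) / total for c in classes}

# One LLM vote plus three supervision-function votes:
print(majority_vote_proba(["pos", "pos", None, "neg"], ["pos", "neg"]))
```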
2306.15895 | 26 | Figure 4: Pie charts of the distributions of the "location" attribute for the NYT dataset.
Footnote 5: Studies of attribute biases on other datasets can be found in Appendix D.2.
To scale up the study of attribute bias, we leverage the dataset generated by AttrPrompt as a probe. In particular, we employ the attributes associated with each AttrPrompt example to train an attribute classifier, which is in turn used to make attribute predictions on the Gold and SimPrompt datasets. Note that the attribute values associated with each AttrPrompt example are not necessarily ground truth; yet since ChatGPT has shown remarkable performance in following instructions [38], the generated data should decently reflect the desired attributes, and therefore the attribute classifier trained on them can partially reveal the underlying attribute distribution of the tested datasets, i.e., Gold and SimPrompt. In Appendix D.1, we justify the use of the attribute classifier by comparing its predictions with manual annotations.
[Figure 5 panels: (a) All, (b) Tennis, (c) Soccer, (d) International Business for Gold; (e)-(h) the same for SimPrompt; (i)-(l) the same for AttrPrompt; legend: Europe, Asia, Africa, North America, South America, Oceania.] | 2306.15895#26 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit the systematic biases of the LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 26 | 5: Initialize α for assigning weight to s.
6: for i = 1 to N do
7:     vi = ki/||ki|| + α · s/||s||
8:     for j = 1 to M do
9:         sj ← sj + cossim(vi, lj)
10:    end for
11: end for
12: return TopK(S)
followed by methods such as Faiss [4] to calculate cosine similarity and extract the top-k legal regulations related to the user query. However, this method often yields suboptimal results when the user's question is vague. Therefore, we aim to extract key information from user queries and use the vector embedding of this information to design an algorithm that improves matching accuracy.
Due to the significant advantages of large models in understanding user queries, we fine-tuned an LLM to extract the keywords from user queries. After obtaining multiple keywords, we adopted Algorithm 1 to retrieve relevant legal provisions. | 2306.16092#26 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
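The chunk above mentions Faiss [4] for the embedding-similarity step; below is a minimal sketch of that kind of cosine top-k search, with the embedding dimension, corpus size, and k chosen purely for illustration:

```python
import faiss
import numpy as np

d = 768                                               # assumed embedding size
law_embs = np.random.rand(1000, d).astype("float32")  # placeholder law embeddings
query = np.random.rand(1, d).astype("float32")        # placeholder query embedding

# L2-normalizing first makes inner-product search equal to cosine similarity.
faiss.normalize_L2(law_embs)
faiss.normalize_L2(query)

index = faiss.IndexFlatIP(d)          # exact inner-product index
index.add(law_embs)
scores, ids = index.search(query, 5)  # top-5 most similar laws
```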
2306.16564 | 26 | From Table 1 we can see that for all three LLMs across all four tasks, the proposed POLAR score consistently outperforms all other methods. Among the baseline methods, Snorkel and the LLM distilled model can achieve top or close-to-top performance in some cases under a specific metric, but they lack the consistency to deliver stable calibration for different LLMs on different tasks. In comparison, the proposed POLAR score is consistently well-calibrated to the true error rate.
Table 1: LLM error calibration using POLAR score, compared with baseline methods. The best entries from each row in terms of low ECE and high R2 are highlighted in bold. | 2306.16564#26 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
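For reference, the ECE metric reported above can be computed with a standard binned estimate; the equal-width binning below is an assumption, since the exact binning used in the paper is not shown in this excerpt:

```python
import numpy as np

def expected_calibration_error(risk, err, n_bins=10):
    """Frequency-weighted gap between mean predicted risk and the
    observed error rate within each risk bin."""
    risk, err = np.asarray(risk, float), np.asarray(err, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (risk >= lo) & (risk < hi)
        if m.any():
            ece += m.mean() * abs(risk[m].mean() - err[m].mean())
    return ece
```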
2306.15895 | 27 | Figure 5: Pie charts of the distributions of "location" predicted by an attribute classifier for the NYT dataset. (a), (e), and (i) are "location" distributions over the whole dataset, while others are for specific classes.
We visualize the distributions of the predicted "location" in Figure 5. From the results, we can see that the "location" distribution of the whole dataset (the first column of Figure 5) is similar to that of the manual annotations (Figure 4). Regarding the "location" distribution of specific classes, we can see that while AttrPrompt still exhibits a balanced distribution, Gold and SimPrompt are biased towards continents other than "North America". In addition, for the class "tennis", the Gold dataset contains much more "North America" than "Oceania", while SimPrompt, in contrast, demonstrates the opposite trend, with a higher representation of "Oceania" than "North America". Such a noticeable disparity highlights the unpredictable nature of these biases, potentially posing risks to models trained on such biased datasets.
# 6 Experiments on the Trained Models
# 6.1 Training with generated data | 2306.15895#27 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit the systematic biases of the LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
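As a sketch of the attribute-classifier probe described in the chunk above (the actual probe is presumably a finetuned neural classifier; the TF-IDF pipeline and toy examples here are stand-ins):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# AttrPrompt texts paired with the "location" attribute used to generate them.
texts = ["Roland Garros clay season preview", "Sydney harbour regatta report",
         "Bundesliga title race analysis", "NBA trade deadline recap"]
locations = ["Europe", "Oceania", "Europe", "North America"]

probe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
probe.fit(texts, locations)

# Predictions on Gold or SimPrompt texts estimate their attribute distribution.
print(probe.predict(["Premier League weekend roundup"]))
```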
2306.16092 | 27 | [Figure 2 residue: an example user query with extracted keywords and the matched legal provisions; the retrieved statutes (dated 2012-12-28, 2009-12-26, 2014-08-21, and 2020-05-28) are garbled in extraction.] | 2306.16092#27 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 27 | Table 1 data (per method: ECE, R2).
CDR:
GPT-4: POLAR 0.0431 / 0.8898; Snorkel 0.1669 / 0.2990; Majority vote 0.1450 / 0.3482; LLM Distilled 0.1643 / 0.5918
GPT-3.5-turbo: POLAR 0.0463 / 0.9340; Snorkel 0.1642 / 0.3200; Majority vote 0.1817 / 0.5395; LLM Distilled 0.2216 / 0.0371
Text-davinci-003: POLAR 0.0550 / 0.9071; Snorkel 0.1536 / 0.3705; Majority vote 0.1492 / 0.4499; LLM Distilled 0.0892 / 0.8964
ChemProt:
GPT-4: POLAR 0.0351 / 0.9340; Snorkel 0.1824 / 0.5102; Majority vote 0.2334 / 0.2439; LLM Distilled 0.2161 / 0.7663
GPT-3.5-turbo: POLAR 0.0481 / 0.9436; Snorkel 0.2283 / 0.6253; Majority vote 0.2822 / 0.0307; LLM Distilled 0.1590 / 0.8447
Text-davinci-003: POLAR 0.0509 / 0.9170; Snorkel 0.2176 / 0.6995; Majority vote 0.2794 / 0.3068; LLM Distilled 0.1961 / 0.8248
SemEval: | 2306.16564#27 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
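The R2 column in the reconstructed Table 1 above measures how closely the risk score tracks the actual error rate; one plausible binned computation is sketched below, with quantile bins as an assumption:

```python
import numpy as np

def binned_r2(risk, err, n_bins=10):
    """R2 between mean predicted risk and observed error rate per bin."""
    risk, err = np.asarray(risk, float), np.asarray(err, float)
    edges = np.quantile(risk, np.linspace(0.0, 1.0, n_bins + 1))
    pred, obs = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (risk >= lo) & (risk <= hi)
        if m.any():
            pred.append(risk[m].mean())
            obs.append(err[m].mean())
    pred, obs = np.array(pred), np.array(obs)
    ss_res = ((obs - pred) ** 2).sum()
    ss_tot = ((obs - obs.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```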
2306.15895 | 28 | # 6 Experiments on the Trained Models
# 6.1 Training with generated data
We quantitatively evaluate the quality of the generated datasets via the test performance of models trained on them. Apart from AttrPrompt and the direct baseline SimPrompt, we include an additional baseline, MetaPrompt [43], which leverages the LLM to generate additional guidance information for improving upon SimPrompt. The details for MetaPrompt are shown in Appendix J. In addition, we use ChatGPT as a zero-shot predictor for comparison. The results are in Table 7. Besides the test performance, we include the cost of querying ChatGPT per 1000 examples in the table.
From the results, we can draw the following conclusions. First, AttrPrompt consistently renders better performance than SimPrompt, with a margin of 6-10 points (Footnote 6). Second, the class-dependent attribute value filter (CAF) is beneficial, since AttrPrompt outperforms its variant
Footnote 6: In Appendix C.2 we show that simply increasing the temperature t for SimPrompt does not significantly improve its performance.
8 | 2306.15895#28 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit the systematic biases of the LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 28 | Table 1 data, continued (per method: ECE, R2).
SemEval:
GPT-4: POLAR 0.0792 / 0.9157; Snorkel 0.0683 / 0.7140; Majority vote 0.1145 / 0.3785; LLM Distilled 0.0627 / 0.9470
GPT-3.5-turbo: POLAR 0.0473 / 0.9631; Snorkel 0.1498 / 0.8212; Majority vote 0.2773 / 0.2081; LLM Distilled 0.1078 / 0.7571
Text-davinci-003: POLAR 0.0665 / 0.9495; Snorkel 0.1186 / 0.7962; Majority vote 0.2417 / 0.3961; LLM Distilled 0.0647 / 0.9358
SMS:
GPT-4: POLAR 0.0144 / 0.9802; Snorkel 0.2435 / 0.0887; Majority vote 0.5882 / 0.0914; LLM Distilled 0.0127 / 0.9768
GPT-3.5-turbo: POLAR 0.0409 / 0.9627; Snorkel 0.0753 / 0.2021; Majority vote 0.1481 / 0.0062; LLM Distilled 0.1312 / (truncated)
Text-davinci-003: POLAR 0.0229 / 0.9427; Snorkel 0.2006 / 0.0528; Majority vote 0.3250 / 0.0909; LLM Distilled 0.0328 / (truncated) | 2306.16564#28 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 29 | Footnote 6: In Appendix C.2 we show that simply increasing the temperature t for SimPrompt does not significantly improve its performance.
without CAF (Footnote 7). Third, out of the four datasets, AttrPrompt outperforms the LLM zero-shot method on three datasets in terms of accuracy, while for the F1 score AttrPrompt surpasses the LLM zero-shot on all of them. Combined with the observation that LLM zero-shot inference incurs much higher costs than data generation, and the fact that the generated data are re-usable for training any model, we argue that for topic text classification, generating training data can be a better way of leveraging an LLM than direct zero-shot inference. Lastly, in most cases the generated data underperform the original training set, indicating that there is still room for future improvement. We conduct further studies in Appendix C.3 to illustrate the performance over different classes.
Table 7: Performance of the models trained with created datasets and the cost of constructing the datasets. The results are averaged over five runs. The gain of AttrPrompt has passed the statistical test with p < 0.05. We also include the performance and cost of using LLM as a zero-shot predictor. | 2306.15895#29 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit the systematic biases of the LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 29 | Figure 2: Result of Keyword LLM and Law LLM
# 3.3 Law LLM
We trained a BERT model using a dataset of 937k national case law examples to extract corresponding legal provisions and judicial interpretations from user queries. This Law LLM model forms an essential component of the ChatLaw product.
# 4 Experiment and Analysis
Evaluating the performance of Large Language Models (LLMs) has always been a challenge. For this purpose, we have collected national judicial examination questions over a decade and compiled a test dataset containing 2000 questions with their standard answers to measure the models' ability to handle legal multiple-choice questions.
However, we found that the accuracy rates of the models are generally quite low. Under these circumstances, simply comparing accuracy rates holds little significance. Therefore, we have established an ELO-point model-competition evaluation mechanism, inspired by the matchmaking mechanism in e-sports and the design of Chatbot Arena [13], to more effectively assess the models' abilities to handle legal multiple-choice questions. | 2306.16092#29 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.15895 | 30 | Table 7 data (per dataset: Acc. / F1 / Price per 1k).
LLM Zero-Shot: NYT 74.16 / 69.84 / 5.44; Amazon 59.55 / 54.56 / 2.11; Reddit 67.00 / 56.66 / 2.89; StackExchange 44.70 / 43.80 / 3.12
Gold: NYT 83.80 / 81.02 / n/a; Amazon 82.23 / 81.12 / n/a; Reddit 84.22 / 83.38 / n/a; StackExchange 67.56 / 63.28 / n/a
SimPrompt: NYT 75.47 / 76.22 / 0.76; Amazon 57.34 / 56.96 / 0.77; Reddit 53.48 / 53.81 / 0.65; StackExchange 42.88 / 41.30 / 0.69
MetaPrompt: NYT 79.58 / 79.83 / 0.87; Amazon 56.35 / 55.98 / 0.84; Reddit 54.61 / 54.30 / 0.74; StackExchange 44.81 / 44.02 / 0.83
AttrPrompt w/o CAF: NYT 80.40 / 80.92 / 0.91; Amazon 61.67 / 61.57 / 0.82; Reddit 61.22 / 60.18 / 0.72; StackExchange 45.90 / 44.84 / 0.81
AttrPrompt: NYT 81.30 / 82.26 / 1.05; Amazon 66.08 / 65.65 / 0.87; Reddit 63.33 / 63.10 / 0.84; StackExchange 48.99 / 47.42 / 0.90
# 6.2 Augmenting existing dataset with generated data | 2306.15895#30 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit the systematic biases of the LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 30 | [Figure 4 residue: win-rate heatmap over ChatLaw, gpt-4, lawyer-llama, gpt-3.5-turbo, OpenLLaMA, and LawGPT; cell values garbled in extraction.]
Figure 3: ELO ranking up until June 25. Figure 4: LLM win rate.
ELO scores: ChatLaw (13B) 1733.85; gpt-4 1712.03; lawyer-llama (13B) 1597.18; gpt-3.5-turbo 1573.35; OpenLLaMA (13B) 1475.55; LawGPT (7B) 1452.35.
Through the analysis of the above experimental results, we can make the following observations:
(1) The introduction of legal-related Q&A and statute data can, to some extent, improve the model's performance on multiple-choice questions;
(2) The addition of specific task types for training significantly improves the model's performance on such tasks. For example, the reason the ChatLaw model outperforms GPT-4 is that we used a large number of multiple-choice questions as training data; | 2306.16092#30 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
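The ELO mechanism behind the rankings above follows the standard rating update; a minimal sketch (the K-factor of 16 is an illustrative assumption, not the paper's setting):

```python
def elo_update(r_a, r_b, score_a, k=16.0):
    """Update two ratings after one matchup; score_a is 1.0 if model A
    wins, 0.0 if it loses, and 0.5 for a draw."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# e.g. ChatLaw (1733.85) beating gpt-3.5-turbo (1573.35):
print(elo_update(1733.85, 1573.35, 1.0))
```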
2306.16564 | 30 | # 4.2 LLM error correction with POLAR-assisted dynamic prompting
In this experiment, we explore the different dynamic prompting strategies of Section 3.4 for correcting LLM errors. As an extension experiment, we focus only on the CDR dataset for GPT-4 and GPT-3.5-turbo. We sort the initial LLM responses by their POLAR score and compute the error rate before and after dynamic prompting. Figure 3a shows that the GPT-4 error rate decreases significantly for both strategies if and only if the POLAR score was high. Otherwise, re-prompting for self-examination or self-supervision can even increase the error rate. Therefore, the best strategy is to perform dynamic prompting only when seeing a high POLAR score.
Figure 3b shows the implementation result of dynamic self-examination and dynamic self-supervision as in Algorithm 2. We choose the POLAR score threshold as δ = 0.5. We can see that the two
8 | 2306.16564#30 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
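A minimal sketch of the threshold-gated re-prompting described above; the llm callable and the follow-up wording are stand-ins, not the paper's Algorithm 2 verbatim:

```python
def dynamic_self_examination(llm, question, answer, polar_score, delta=0.5):
    """Re-prompt the LLM only when the POLAR risk score flags the answer."""
    if polar_score <= delta:      # low risk: keep the original answer
        return answer
    followup = (f"{question}\nYour previous answer was: {answer}. "
                "It may be incorrect; please re-examine and answer again.")
    return llm(followup)
```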
2306.15895 | 31 | # 6.2 Augmenting existing dataset with generated data
Here, we merge the generated dataset and the original training set into a single training set and then test the performance of the model trained on the merged dataset, to see whether the generated data can further improve model performance when the original training set is available. We present the results in Table 8. From the table, we can see that the generated dataset is an effective complement to the original training set, since most of the generated datasets introduce a performance gain when combined with the original training set, especially our AttrPrompt, which leads to improvements in all cases. This notable improvement from a simple dataset merge may motivate future studies of more advanced ways of using generated data to augment existing datasets.
Table 8: Performance of the model trained with the original training set/augmented with the generated dataset. We present the performance gain/drop compared to using the original training set in green/red. | 2306.15895#31 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit the systematic biases of the LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
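The augmentation studied in the chunk above is, at its core, a plain concatenation of gold and generated examples before ordinary supervised training; a minimal sketch with hypothetical (text, label) pairs:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

gold = [("Wimbledon final recap", "tennis"), ("Fed rate decision", "economy")]
generated = [("Clay-court strategy guide", "tennis"), ("Inflation outlook", "economy")]

texts, labels = zip(*(gold + generated))   # the merged training set
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)                     # then train as usual
```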
2306.16092 | 31 | (3) Legal multiple-choice questions require complex logical reasoning, so models with a larger number of parameters usually perform better.
# 5 Conclusions
In this paper, we proposed ChatLaw, a legal large language model (LLM) developed using legal domain knowledge. We propose a novel approach that combines the LLM with vector knowledge databases, which significantly alleviates the hallucination problem commonly seen in LLMs. Our stable model-handling strategies enable the resolution of various legal domain problems. Additionally, we release a dataset of legal multiple-choice questions and design an ELO model-ranking mechanism.
However, limitations remain due to the scale of the base model: our performance on tasks such as logical reasoning and deduction is not optimal. Additionally, after incorporating a large amount of domain-specific data, further research is required to improve ChatLaw's generalization on generic tasks. There are potential social risks associated with ChatLaw, and we advise users to make use of our method for proper purposes.
# References | 2306.16092#31 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 31 | 8
strategies improve performance for both GPT-3.5-turbo and GPT-4. With dynamic self-supervision, GPT-3.5-turbo surpasses the best result so far achieved without gold-labeled training data [40], and GPT-4 outperforms the state-of-the-art supervised method utilizing fully-labeled data [36]. It is noteworthy that both strategies make use of zero gold-labeled examples.
(a) POLAR score and LLM error rate change (b) Dynamic prompting performance
Figure 3: (a) GPT-4 error rate before and after dynamic prompting, conditional on the initial POLAR score. (b) shows the performance improvement using the two dynamic prompting strategies.
# 5 Components for Calibration | 2306.16564#31 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 32 | Table 8 data (per dataset: Acc. / F1, with gain over the original training set in parentheses).
SimPrompt: NYT 85.56 (+1.76) / 86.34 (+5.32); Amazon 81.85 (-0.38) / 80.23 (-0.89); Reddit 85.11 (+0.89) / 84.88 (+1.50); StackExchange 74.53 (+6.97) / 74.23 (+10.95)
MetaPrompt: NYT 87.14 (+3.34) / 87.33 (+6.31); Amazon 82.12 (-0.11) / 80.14 (-0.98); Reddit 84.71 (+0.49) / 84.62 (+1.24); StackExchange 76.02 (+8.46) / 75.70 (+12.42)
AttrPrompt w/o CAF: NYT 85.71 (+1.91) / 87.18 (+6.16); Amazon 82.24 (+0.01) / 80.76 (-0.36); Reddit 85.86 (+1.64) / 85.65 (+2.27); StackExchange 75.16 (+7.60) / 74.64 (+11.36)
AttrPrompt: NYT 87.47 (+3.67) / 88.06 (+7.04); Amazon 83.95 (+1.72) / 83.93 (+2.81); Reddit 86.08 (+1.86) / 85.98 (+2.60); StackExchange 76.86 (+9.30) / 76.53 (+13.25)
# 6.3 The budget and sample efficiency of the generated data | 2306.15895#32 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit the systematic biases of the LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 32 |
# References
[1] Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. Falcon-40B: an open large language model with state-of-the-art performance. 2023.
[2] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023.
[3] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. | 2306.16092#32 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 32 | # 5 Components for Calibration
External sources of information. The supervision functions serve as external sources of information that help the harmonizer to be well-calibrated rather than biased by overfitting to the LLM responses. To compare with the scenario where external sources of information are absent, we refer to the LLM distilled model columns in Table 1. We can see that without the supervision functions, the calibration ability is not consistently good, due to overfitting to the LLM.
Pareto loss scalarizer and harmonizer. Table 2 shows the ECE and R2 measures for different loss scalarizers G and different harmonizer model types, averaged over the three LLMs on the four tasks. The nonlinear quadratic loss scalarizer paired with BERT finetuning gives the best calibration ability. For simpler models like MLP and LR, the simple linear scalarizer works best. Also note that the Chebyshev scalarizer has the worst performance in almost all cases, which experimentally supports Theorem 1: a Pareto loss scalarizer (Def. 2) is essential to approximate a Pareto optimal harmonizer (Def. 1).
Table 2: Average calibration ability for different loss scalarizer G and harmonizer type. G function Quadratic R2 | 2306.16564#32 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 33 | # 6.3 The budget and sample efficiency of the generated data
Here, we study two types of efficiency of the generated dataset, budget efficiency and sample efficiency, and their effect on model performance. First, in Figure 6, we compare the budget efficiency of AttrPrompt against that of SimPrompt. Surprisingly, AttrPrompt requires only 5% of the budget to be on par with or outperform SimPrompt at 100% of the budget across all the datasets. This observation highlights the significance of diverse prompts in the training data generation process.
Second, we examine the sample efficiency of Gold, SimPrompt, and AttrPrompt in Figure 7. While both SimPrompt and AttrPrompt exhibit better sample efficiency than Gold in the low-data regime, performing better when the dataset size is relatively small, Gold data shows better sample efficiency in the high-data regime. Overall, AttrPrompt delivers better sample efficiency than SimPrompt, which suggests that increasing the diversity of the prompts could be an effective way to improve the unsatisfactory data scaling trend of using LLMs as data generators [56]. A minimal sketch contrasting the two prompting schemes follows.
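The sketch below illustrates the contrast between a simple class-conditional prompt and an attributed prompt; the class names, attribute pools, and template wording are hypothetical stand-ins, not the paper's actual templates.

```python
import random

ATTRIBUTES = {  # illustrative attribute pools, not the paper's exact ones
    "length": ["short (~50 words)", "long (~200 words)"],
    "style": ["formal", "casual", "technical"],
    "subtopic": ["history", "recent news", "how-to"],
}

def sim_prompt(label: str) -> str:
    # SimPrompt-style: the same class-conditional prompt for every query.
    return f"Write a document about {label}."

def attr_prompt(label: str, rng: random.Random) -> str:
    # AttrPrompt-style: randomly combined attribute values diversify the data.
    a = {k: rng.choice(v) for k, v in ATTRIBUTES.items()}
    return (f"Write a {a['length']}, {a['style']} document about {label}, "
            f"focusing on {a['subtopic']}.")

rng = random.Random(0)
print(sim_prompt("sports"))
print(attr_prompt("sports", rng))
```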
Footnote 7: Examples of the filtered attributes are exhibited in Appendix H.
Figure 6: Comparison of budget efficiency on the four datasets (panels: Amazon, NYT, Reddit, StackExchange). | 2306.15895#33 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 33 | [4] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547, 2019.
[5] OpenAI. GPT-4 technical report, 2023.
[6] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face, 2023.
[7] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[8] Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, and Ting Liu. Huatuo: Tuning LLaMA model with Chinese medical knowledge, 2023. | 2306.16092#33 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 33 | Table 2: Average calibration ability for different loss scalarizer G and harmonizer type.

Harmonizer type   Linear ECE / R2    Quadratic ECE / R2   Euclidean norm ECE / R2   Chebyshev ECE / R2
BERT              0.0625 / 0.9273    0.0458 / 0.9366      0.0549 / 0.9003           0.0711 / 0.8260
MLP               0.0555 / 0.9392    0.0974 / 0.9188      0.0691 / 0.9302           0.0775 / 0.8934
LR                0.0641 / 0.9360    0.1072 / 0.9020      0.0766 / 0.9288           0.0948 / 0.8813
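For reference, the ECE column is the standard expected calibration error. A minimal sketch of its computation with equal-width confidence bins follows; the function name and bin count are choices of this sketch, not the paper's code.

```python
import numpy as np

def expected_calibration_error(conf: np.ndarray, correct: np.ndarray,
                               n_bins: int = 10) -> float:
    """Bin predictions by confidence and average |accuracy - confidence|
    per bin, weighted by the fraction of samples in the bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # First bin is closed on the left; later bins are half-open.
        mask = (conf > lo) & (conf <= hi) if lo > 0 else (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)

conf = np.array([0.9, 0.8, 0.3, 0.95])
correct = np.array([1, 1, 0, 1])
print(expected_calibration_error(conf, correct))
```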
# 6 Conclusion
We propose a novel framework for LLM calibration using Pareto optimal self-supervision. Our theoretical results showed the calibration potential of a model distilled from the LLM, and the importance of incorporating independent weak supervision signals. The proposed Pareto optimal learning problem was shown to approximate Pareto optimality. Experimentally, the proposed POLAR score is consistently well calibrated to the probability of LLM error. We introduce POLAR-based dynamic prompting to automatically correct LLM errors, which boosts GPT-4 baseline performance past the SOTA supervised model without using any human-labeled training data.
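The dynamic prompting step can be read as risk-gated re-querying: only responses whose POLAR score exceeds a threshold are re-prompted with extra guidance. A minimal sketch under that reading; `query_llm`, `polar_score`, and the toy stubs are hypothetical, and the paper's actual strategy is more involved.

```python
from typing import Callable, Optional

def correct_with_polar(
    example: str,
    query_llm: Callable[..., str],            # LLM call (stubbed below)
    polar_score: Callable[[str, str], float], # harmonizer-based risk score (stubbed)
    threshold: float = 0.5,
) -> str:
    """Risk-gated error correction: re-prompt only when POLAR flags a likely error."""
    answer = query_llm(example)
    if polar_score(example, answer) > threshold:
        # Dynamic prompting: retry with extra guidance, e.g. high-confidence
        # in-context examples (simplified relative to the paper's strategy).
        answer = query_llm(example, hint="verified in-context examples ...")
    return answer

# Toy stubs so the sketch runs end to end.
def toy_llm(x: str, hint: Optional[str] = None) -> str:
    return "corrected" if hint else "initial"

def toy_polar(x: str, y: str) -> float:
    return 0.9 if y == "initial" else 0.1

print(correct_with_polar("input text", toy_llm, toy_polar))  # -> corrected
```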
# Reproducibility Statement | 2306.16564#33 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 34 | Figure 6: Comparison of budget efficiency on the four datasets (panels: Amazon, NYT, Reddit, StackExchange).
Figure 7: Comparison of data efficiency on the four datasets (panels: Amazon, NYT, Reddit, StackExchange).
Figure 8: Bar plot of performance with LLM generators of different parameter sizes (panels: Amazon, NYT, Reddit, StackExchange). Note that due to the budget limit, for the GPT-4 model the generated dataset is only 10% of the full set, so the result is not directly comparable with the other models.
# 6.4 The performance with respect to model parameter size | 2306.15895#34 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16092 | 34 | [9] Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. BloombergGPT: A large language model for finance, 2023.
[10] Hongyang Yang, Xiao-Yang Liu, and Christina Dan Wang. FinGPT: Open-source financial large language models, 2023.
[11] Ping Yang, Junjie Wang, Ruyi Gan, Xinyu Zhu, Lin Zhang, Ziwei Wu, Xinyu Gao, Jiaxing Zhang, and Tetsuya Sakai. Zero-shot learners for natural language understanding via a unified multiple choice perspective, 2022.
[12] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Zhiyuan Liu, Peng Zhang, Yuxiao Dong, and Jie Tang. GLM-130b: An open bilingual pre-trained model. In The Eleventh International Conference on Learning Representations (ICLR), 2023. | 2306.16092#34 | ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases | Large Language Models (LLMs) have shown the potential to revolutionize
natural language processing tasks in various domains, sparking great interest
in vertical-specific large models. However, unlike proprietary models such as
BloombergGPT and FinGPT, which have leveraged their unique data accumulations
to make strides in the finance domain, there have not been many similar large
language models in the Chinese legal domain to facilitate its digital
transformation.
In this paper, we propose an open-source legal large language model named
ChatLaw. Due to the importance of data quality, we carefully designed a legal
domain fine-tuning dataset. Additionally, to overcome the problem of model
hallucinations in legal data screening during reference data retrieval, we
introduce a method that combines vector database retrieval with keyword
retrieval to effectively reduce the inaccuracy of relying solely on vector
database retrieval. Furthermore, we propose a self-attention method to enhance
the ability of large models to overcome errors present in reference data,
further optimizing the issue of model hallucinations at the model level and
improving the problem-solving capabilities of large models. We also
open-sourced our model and part of the data at
https://github.com/PKU-YuanGroup/ChatLaw. | http://arxiv.org/pdf/2306.16092 | Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, Li Yuan | cs.CL | null | null | cs.CL | 20230628 | 20230628 | [
{
"id": "2302.13971"
}
] |
2306.16564 | 34 | # Reproducibility Statement
The proofs of the theoretical results are in Appendix A. The implementation of our experiments is illustrated in Algorithms 1 and 2. The training details of the harmonizer model are listed in Appendix C. The prompts for querying the LLMs are described in Appendix D so that the responses can be reproduced. Anonymized code is provided in the supplementary material.
# References
[1] T. A. Almeida, J. M. G. Hidalgo, and A. Yamakami. Contributions to the study of SMS spam filtering: new collection and results. In Proceedings of the 11th ACM symposium on Document engineering, pages 259–262, 2011.
[2] A. Awasthi, S. Ghosh, R. Goyal, and S. Sarawagi. Learning from rules generalizing labeled exemplars. arXiv preprint arXiv:2004.06025, 2020.
[3] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. | 2306.16564#34 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 35 | # 6.4 The performance with respect to model parameter size
Effect of the Model Size for LLM Generators. To study the effect of different LLMs on AttrPrompt, we use other instruction-finetuned GPT models as the generator, namely text-ada-001 [38], text-babbage-001 [38], text-curie-001 [38], and GPT-4 [36] (due to budget constraints, we only generate a subset 10% the size of the original dataset). Under all settings, our model outperforms the direct baseline SimPrompt by a large margin. Besides, the performance is generally better with larger models, as they often have better instruction-following capabilities. In addition, an interesting finding is that for SimPrompt (but not for AttrPrompt), the average performance when using ChatGPT is worse than with text-curie-001. This suggests that straightforward class-dependent prompts might not exploit the capabilities of LLMs as effectively as our proposed approaches. | 2306.15895#35 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 35 | [4] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, | 2306.16564#35 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lacks an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |